title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence | Accept (poster) | Summary: This paper exploits Stable Diffusion (SD) features for semantic and dense correspondence tasks. The authors first conduct evaluations of SD features and find that SD features provide high-quality spatial information but sometimes inaccurate semantic matches. The paper further shows that such SD features are complementary to DINOv2 features, which provide sparse but accurate matches. By fusing SD and DINOv2 features with a simple weighted concatenation strategy, this paper achieves significant performance gains over state-of-the-art methods on benchmark datasets, e.g., SPair-71k, PF-Pascal, and TSS.
Strengths: - **Extensive evaluation and analysis.** This paper conducts lots of experiments and visualizations to analyze the behavior of Stable Diffusion and DINOv2 features. The experiments are also well-organized, with a smooth logic flow, which makes it easy for readers to follow.
- **Strong zero-shot performance.** By simply combining the Stable Diffusion and DINOv2 features, this paper shows strong zero-shot performance of the semantic correspondence task, outperforming previous task-specific methods by a large margin.
- **The message shown in this paper could be interesting to the community.** Unlike previous methods that mostly focus on studying single image diffusion models, this paper studies the properties of diffusion model features across different images in the context of semantic correspondence. This paper shows that the publicly available Stable Diffusion and DINOv2 models contain rich information to solve the semantic correspondence task, even outperforming task-specific models in the zero-shot setting. This would be a strong message and might stimulate future work along this direction.
Weaknesses: It's unclear how the proposed method handles outliers in the correspondences, since the correspondences are obtained with nearest-neighbor search, which might be less robust to occlusion or out-of-frame points.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - I am wondering what might lead to the different behaviors of Stable Diffusion features and DINOv2 features? A more in-depth discussion might make this paper stronger. Could the authors further comment on this?
- As also discussed in the limitation section, the spatial resolutions of Stable Diffusion features and DINOv2 features are both relatively low, which might hurt performance on fine-grained details. Besides Stable Diffusion, there are also other diffusion models that work directly in the original pixel space instead of the downsampled latent space; would these kinds of diffusion models potentially alleviate this issue? I am not asking for such experiments, just wondering about the authors' opinion on this.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, the limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
**W1:** Handle outliers in the correspondences.
**A:** Our method primarily focuses on analyzing the properties of each feature and their fusion, and thus we adopt a very simple matching method (i.e., nearest-neighbor search) without using any matching priors or templates. Despite this, our method achieves superior or competitive accuracy on benchmark datasets (Tables 3 and 4 in the main paper). However, we believe that methods designed specifically to handle outliers in the correspondence task (OmniMotion [1] for occlusion, and PWarpC [2] for unmatchable points) can benefit from our proposed fused features. We consider integrating our features into those systems, or designing a novel fusion architecture that identifies occluded points or outliers, to be an important future direction for our work.
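As a concrete illustration, the nearest-neighbor matching described above could be sketched as below (a minimal pure-Python sketch with hypothetical names; the actual implementation operates on dense feature maps rather than lists of vectors):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def nn_match(src_feats, tgt_feats):
    """For each source feature, return the index of the most similar
    target feature (argmax over cosine similarity).  Note: no outlier
    rejection is performed, which is the limitation discussed above --
    occluded or out-of-frame points still receive a (spurious) match."""
    matches = []
    for f in src_feats:
        sims = [cosine(f, g) for g in tgt_feats]
        matches.append(max(range(len(sims)), key=sims.__getitem__))
    return matches
```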
------
**Q1:** Causes of the complementary properties of SD and DINOv2.
**A:** Please refer to the "General Response Q&A section".
------
**Q2:** Potential of pixel-space diffusion models for overcoming limitations in feature resolution.
**A:** While we agree that this seems like a promising direction, there may be a few challenges. Pixel-space diffusion models typically operate through a coarse-to-fine methodology: they start with an initial low-resolution (often 64x64) text-to-image diffusion process, followed by one or more subsequent upsampling diffusion processes. Extracting a feature map larger than 64x64 would necessitate relying on the upsampling diffusion model. However, while the upsampling diffusion models are crucial for image clarity, they do not necessarily need to comprehend the inherent structure of the image. As such, these models might contain less semantic information than the text-to-image diffusion models, potentially limiting their efficacy in semantic correspondence tasks.
Additionally, from our investigation of competitive diffusion models in the current landscape, it appears that none of them simultaneously meet the conditions of 1) being open-sourced, 2) operating within the pixel-space, and 3) offering feature maps with a resolution higher than 64x64. This makes it challenging for us to test the above idea, though it certainly is interesting.
|Model|FID on I-Net↓|Open-sourced|Pixel-space|Large f_map|
|-|:-:|:-:|:-:|:-:|
|GLIDE|12.24|||❌|
|DALLE2|10.39|❌|||
|SD|8.32||❌||
|Imagen|7.27|❌|||
|eDiff-I|6.95|❌|||
|ERNIE-ViLG2.0|6.75|❌|❌||
|RAPHAEL|6.61|❌|❌||
Table 1. Survey of competitive diffusion models.
As a potential fix for the spatial resolution limitation, one could train an additional projection layer. This layer, when trained on top of both the input image and its corresponding low-resolution features, might offer a bridge between spatial granularity and semantic depth. We consider this a promising direction for future exploration.
-----
**References**
[1] Tracking Everything Everywhere All at Once, Wang et al. 2023
[2] Probabilistic Warp Consistency for Weakly-Supervised Semantic Correspondences, Truong et al. 2022
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed response; I am happy to increase my rating to Accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We appreciate your comments and will further refine our paper. | Summary: This paper, for the first time, proposes a novel method to fuse Stable Diffusion features and DINOv2 features to obtain a robust feature representation that readily surpasses SOTA semantic correspondence work without further training; rather, simply adopting a winner-takes-all assignment yields SOTA correspondence performance. Interestingly, with such correspondences, high-quality object swapping is also enabled without task-specific fine-tuning.
Strengths: 1. Novel idea, approach and thorough analysis.
2. Motivation is really good and the paper was easy to read and understand.
3. Impressive performance.
4. Good application (swapping)
Weaknesses: 1. This is more of a question than a weakness. In Figure 1, the visualization shows that, given a cat as a source image, the proposed approach finds dense "correspondence" to a dog, a horse, and a motorcycle. Is this really correspondence? To me, it feels more like foreground segmentation. From the colors of the features, I could tell that the facial part of the cat is matched to the frontal part of the motorcycle and the facial parts of the other animals. This is very interesting. Why do you think this visualization is obtained even though, strictly speaking, there should not be correspondences?
2. Why is it that on SPair-71k, the proposed method yields leading results, but not for PF-PASCAL and TSS? As the authors state, PF-PASCAL and TSS are less challenging datasets, but results seem to show the opposite.
3. Missing baselines: for supervised ones, CATs++ (an extension of CATs) and IFCAT from the same authors achieve better performance. Also, PWarpC is a work that adopts unsupervised learning for semantic correspondence; I would suggest comparing with it as well. I don't think it would be a weakness even if the proposed method does not attain SOTA, since the contribution lies more in the fusion of features and their analysis.
4. In the implementation details, it says the input resolution is 960 x 960. What was the evaluation keypoint annotation resolution? The input resolution, evaluation resolution, and many other resolution-related factors have a high influence on performance in this task, as addressed by PWarpC (CVPR'22) and CATs++ (TPAMI). So if the evaluation is performed at a higher resolution than in other works, the comparison may not be fair at all. This needs to be clarified.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weaknesses above. If they are adequately addressed in the rebuttal, I think this paper will be a very strong paper. My rating is solely based on the performance part, which if clarified, then I am happy to increase my rating to accept this paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: Clarity on the visualization of Fig. 1.
**A**: We show challenging cross-category semantic correspondences that match semantically related or geometrically similar parts across different object categories or even domains. The Neural Best-Buddies [1] and DINOv2 papers also visualize such examples.
As shown in Fig. 1, our method matches objects even in different poses, suggesting that diffusion features contain information related to semantics/pose/shape. This is more information than is needed for foreground segmentation. We hypothesize that diffusion features contain high-level semantic and structural information which can relate conceptually different objects, e.g., cats and motorcycles.
---
**W2**: Clarity on the performance of PF-Pascal and TSS datasets.
**A:** Our method achieves leading results on PF-Pascal and TSS datasets, specifically:
- **The best results on PF-Pascal dataset among the *unsupervised* methods**, with 49.17%, 18.62%, and 6.46% relative improvement on PCK@0.05, 0.10, and 0.15, respectively (see Table 1 below). Additionally, if we use a simple form of supervision for our fused features (by learning a single projection layer with the PF-Pascal training set), **our method performs similarly to other supervised methods on PF-Pascal**. If we were to fine-tune the backbone networks (SD or DINOv2) like other supervised approaches, we would expect to see even bigger improvements.
- **The best results on TSS among *unsupervised nearest neighbors* methods** and is near SOTA for *unsupervised* methods including those with task specific networks and losses (see Table 2 below). While Semantic-GLU-Net performs better on TSS, it is important to note that Semantic-GLU-Net significantly underperforms our fused features on SPair-71k (23.5 vs our 62.9 for PCK@0.1) and PF-Pascal (72.5 vs our 86.0 for PCK@0.1).
|Technique|Method|PCK@0.05|PCK@0.10|PCK@0.15|
|-|:-|:-:|:-:|:-:|
|Unsupervised|CNN-Geo [43]|41.0|69.5|80.4|
||Glu-Net [57]|42.2|69.1|83.1|
||Semantic-Glu-Net [60]|48.3|72.5|85.1|
||Stable Diffusion (**Ours**)|*63.2*|*77.7*|*86.3*|
||Fuse-ViT-B/14 (**Ours**)|**72.1**|**86.0**|**90.6**|
||
|Supervised|PWarpC-NC-Net|79.2|92.1|95.6|
||CATs++* |*84.9*|*93.8*|96.8|
||IFCAT*|**88.0**|**94.8**|**97.9**|
||Fuse-ViT-B/14 trained projection layer (**Ours**)|80.9|93.6|*96.9*|
Table 1. PF-Pascal Performance. * denotes finetuned backbones. In a category, **bold** is best, *italics* is second best.
|Technique|Method|FG3DCar|JODS|Pascal|Avg.|
|-|-|:-:|:-:|:-:|:-:|
|Unsupervised TS|CNN-Geo [43]|90.1|*76.4*|56.3|74.4|
||PARN [23]|89.5|75.9|*71.2*|78.8|
||GLU-Net [57]|93.2|73.3|71.1|79.2|
||Semantic-GLU-Net [60]|**95.3**|**82.2**|**78.2**|**85.2**|
||
|Unsupervised NN|DINOv2|82.8|73.9|53.9|72.0|
||Stable Diffusion (**Ours**)|93.9|69.4|57.7|77.7|
||Fuse-ViT-B/14 (**Ours**)|*94.3*|73.2|60.9|*79.7*|
Table 2. TSS Performance. TS is task-specific, NN is nearest-neighbors. **Bold** is best, *italics* is second best. Fuse-ViT-B/14 (**Ours**) performs best among Unsupervised NN methods and second best among all Unsupervised methods (see note above regarding Semantic-GLU-Net).
**With regards to the dataset difficulty**, weaker methods can often perform better on easier datasets by exploiting specific biases or limitations. Specifically,
- **TSS** contains a limited number of object classes mostly under rigid transforms. Having a template-based parametric approach [2] can achieve good performance on TSS.
- **PF-Pascal** contains image pairs with the same viewpoint, and does not require an intrinsic understanding of the instance's structure and orientation.
- **SPair-71k**, in contrast, contains image pairs with more diverse viewpoints, scales, occlusion, and truncation than PF-Pascal and TSS, and is known to be more challenging and thus more informative as an evaluation.
Accordingly, **achieving strong performance on SPair-71k is particularly meaningful**. Our zero-shot setting already achieves performance comparable to the supervised SOTA on SPair-71k. With a simple form of supervision on our fused features, our method surpasses the supervised SOTA (ours 78.2 vs. IFCAT* 64.4).
---
**W3**: Additional baselines.
**A:** We include PWarpC-CATs in Table 4 of the main paper but didn't include its weakly-supervised variations due to space limits. Please see Tab. 1 in R2-fsqe's W1 response for a comparison with CATs++, IFCAT, and PWarpC; it shows that we still attain SOTA on the challenging SPair-71k dataset under the supervised setting. We will include these baselines in the revised version.
---
**W4**: Effect of the keypoint annotation resolution and the input image resolution.
**A:** Our evaluation keypoint annotation resolution is 840, employed for all NN-based methods. Table 3 provides the PCK results on a 20-sample SPair-71k subset at different input resolutions and annotation resolutions (the minimum input resolution is restricted by the SD model). Similar to PWarpC's Table 1, using different annotation resolutions only marginally affects the accuracy. On the other hand, as in CATs++, we also observe that the input image resolution matters more, especially under stricter thresholds (PCK@0.05 and 0.01).
|Input Image Reso.|Annotation Reso.|PCK@0.10|PCK@0.05|PCK@0.01|
|-|-|:-:|:-:|:-:|
|960|840|63.28|47.61|8.32|
|(feat map 60*60)|ori.|62.80|47.36|8.11|
||256|62.54|47.58|8.36|
||
|512|840|61.66|40.73|4.58|
|(feat map 32*32)|ori.|61.58|40.43|4.28|
||256|61.51|40.35|4.40|
Table 3. PCK performance of fuse features under different input image resolution and keypoint annotation resolution on SPair-71k.
Comparing with CATs++ under the same settings (512 input and 256 annotation resolution), our method still performs better, notably in PCK@0.10, with ours 61.51 vs theirs 59.9 (number taken from their Fig. 12a).
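For context, the PCK metric reported in these tables could be sketched as below (a minimal sketch with hypothetical names; benchmarks differ in whether the threshold is relative to the bounding box or the image size, which is exactly the annotation-resolution issue discussed above):

```python
def pck(pred_kps, gt_kps, ref_size, alpha=0.10):
    """Percentage of Correct Keypoints: a predicted keypoint counts as
    correct if it lies within alpha * ref_size of its ground truth,
    where ref_size is e.g. max(bbox_h, bbox_w) or max(img_h, img_w)."""
    thresh = alpha * ref_size
    correct = sum(
        1
        for (px, py), (gx, gy) in zip(pred_kps, gt_kps)
        if ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5 <= thresh
    )
    return correct / len(gt_kps)
```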
---
**References**
[1] Neural Best-Buddies: Sparse Cross-Domain Correspondence, Aberman et al. 2018
[2] GLU-Net: Global-Local Universal Network for Dense Flow and Correspondences, Truong et al. 2020
---
Rebuttal Comment 1.1:
Comment: Thanks for the thorough responses.
As I don't find any other major concerns, I will increase the rating.
---
Reply to Comment 1.1.1:
Comment: Thank you! We appreciate your comments and will revise this paper accordingly. | Summary: The paper explores the use of Stable Diffusion (SD) features for dense correspondence. The authors investigate the potential of SD and DINOv2 features and show some complementarity. SD features provide high-quality spatial information but sometimes inaccurate semantic matches while DINOv2 features offer sparse but accurate matches. The authors show that averaging the features from the two models might achieve strong performances for dense correspondence. The fused features are evaluated using a zero-shot approach, where no specialized training is performed for the correspondence task, and nearest neighbors are used for evaluation. Surprisingly, the fused features outperform state-of-the-art methods on benchmark datasets like SPair-71k, PF-Pascal, and TSS. The authors also demonstrate that these correspondences enable interesting applications such as instance swapping between images while preserving object identities.
Strengths: **In-depth Qualitative Analysis** The paper conducts a detailed qualitative analysis of the Stable Diffusion (SD) features and DINOv2 features, shedding light on their respective strengths and weaknesses. This analysis provides valuable insights into the semantic and texture relevance of these features, highlighting their distinct properties.
**Extensive Experiments** The paper presents extensive experimental results on benchmark datasets, including SPair-71k, PF-Pascal, and TSS. The authors report significant gains compared to SOTA methods.
**Application of Instance Swapping** The paper well illustrate the complementarity of the feature on this task.
Weaknesses: **Lack of Strong Quantitative Assessments** While the paper provides an in-depth qualitative analysis of the complementarity between SD and DINOv2 features, it lacks strong quantitative assessments to support these claims. The true added value of the paper should have been a robust quantitative evaluation showcasing in particular the non-redundancy of the SD and DINOv2 features. This absence limits the overall impact and credibility of the proposed approach.
**Incomplete Figure 2?** There appears to be an issue with Figure 2: the bottom illustration is missing despite being mentioned in the legend.
**Lack of Clarity in PCA Aggregation** The authors mention the use of PCA for aggregating SD features, but the explanation provided is not clear enough. The authors mention computing PCA across the pair of images for each layer of the feature map, followed by upsampling features from different layers to the same resolution to form the ensembled feature. However, I have trouble understanding clearly what the workflow is here.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: To provide a fair evaluation, it would be interesting to have the performance of other fused features based on DINO variations, such as DINOv1 + SD or EsViT [1] + SD.
[1] Chunyuan Li et al. “Efficient Self-supervised Vision Transformers for Representation Learning”, ICLR22
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors underline the technical limitation of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
**W1**: Quantitative assessments on the non-redundancy of SD and DINOv2 features.
**A**: We include several quantitative evaluations that underscore the non-redundancy and distinctiveness of SD and DINOv2 features. Specifically, we provide: *1) quantitative error analysis on fused and individual features; 2) evidence of non-redundancy across datasets; 3) smoothness analysis on the TSS flow fields; and 4) the correspondence performance of the fused and independent features.* Details from these evaluations are provided as follows:
1. **Quantitative error analysis on fused and individual features:** In **Supplemental A.7**, we provide an in-depth quantitative analysis on feature fusion. It includes:
- Distribution of outcomes, considering the performance of SD, DINOv2, and fused features -- showing the considerable proportion of cases where only one feature succeeds;
- A detailed discussion of the conditional probability of the fused features' performance under different SD and DINOv2 outcomes -- showing that fusion can help rectify cases where one or both features fail;
- A relative distance analysis in certain scenarios -- further shedding light on how the fused features help under certain circumstances.
2. **Non-redundancy across datasets and PCK levels:** We present additional evidence of the non-redundancy of SD and DINOv2 features in Table 1, which details the error distribution for these two features on SPair-71k and PF-Pascal benchmarks at three different PCK levels. As shown in this table, in 20~30% of total cases under most settings, one feature succeeds where the other fails; this suggests that they have a substantial amount of non-redundant information.
|||SD, DINO fails|SD fails, DINO correct|SD correct, DINO fails|SD, DINO correct|
|-|-|:-:|:-:|:-:|:-:|
|SPair-71k|PCK@0.15|21.68|15.70|13.95|48.67|
||PCK@0.10|29.20|15.76|15.33|39.66|
||PCK@0.05|44.50|14.20|15.81|25.50|
||
|PF-Pascal|PCK@0.15|5.60|8.27|11.12|75.01|
||PCK@0.10|9.99|9.68|12.72|67.62|
||PCK@0.05|27.07|11.98|16.78|44.17|
Table 1. Distribution of outcomes under different datasets and PCK levels.
3. **Smoothness analysis on the TSS flow fields:** Table 2 in the main paper offers a quantitative analysis of smoothness on computed flow fields and suggests that SD features produce dense correspondences which are much smoother than those from DINOv2 features; this strongly suggests that SD features contain more spatial information than DINO features.
4. **Correspondence performance of fused and individual features:** In Table 3 of the main paper, the improved quantitative performance of our fusion results shows that these features have non-redundant elements (otherwise fusing them would not provide a quantitative improvement). Our fusion result leads to a 13% improvement on SPair over either individual feature. Additionally, Table 3 reports the performance of different features on per-category subsets, with accompanying analysis in Section 4.1. In particular, several categories demonstrate that DINO and SD have drastically different performance. Aero, Bike, Boat, Car, Dog, Horse, Motor, Person, Sheep all perform better with DINO features, while Cow, Plant, Train, TV perform better with SD features. Specifically, the categories that SD performs better on tend to have less texture signal (e.g. TV, Plant).
We also provide qualitative analysis to support our claim in: 1) Section 3.3, which showcases extensive qualitative experiments highlighting multiple distinctions between the two features; and 2) Supplemental B.1, where we provide additional visual results to showcase the complementarity.
We will highlight the quantitative analysis more clearly in the updated main paper.
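For clarity, the weighted concatenation underlying our fusion could be sketched as below (a minimal sketch for a single spatial location; the alpha-weighting of L2-normalized features shown here is one plausible form, and the exact normalization and weight used in the paper may differ):

```python
def l2_normalize(v):
    """Scale a feature vector to unit L2 norm."""
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v] if n else list(v)

def fuse(sd_feat, dino_feat, alpha=0.5):
    """Weighted concatenation of two features at one spatial location:
    normalize each feature so that neither dominates the distance
    metric, weight by alpha vs. (1 - alpha), and concatenate along
    the channel axis.  Nearest-neighbor matching on the fused vector
    then blends evidence from both backbones."""
    sd = [alpha * x for x in l2_normalize(sd_feat)]
    dino = [(1 - alpha) * x for x in l2_normalize(dino_feat)]
    return sd + dino
```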
------
**W2**: Clarity on the grouping of Figure 2.
**A:** We divide Figure 2 into two parts with a dotted line. The top two rows show the visualization of PCA-computed features and the bottom two rows show K-Means-clustered features. We will make the figure clearer in an updated version.
------
**W3**: Clarity on the details of PCA aggregation.
**A:** Here is a more detailed explanation on how to aggregate SD features using PCA.
We first extract the $i^{th}$ layer’s features for source and target images, $\{f^s\_i\}\_{i\in[2,5,8]}$ and $\{f^t\_i\}\_{i\in[2,5,8]}$. Next, we concatenate each layer’s source feature and target feature and compute PCA together: $\{\tilde f^s\_i, \tilde f^t\_i = PCA(f^s\_i || f^t\_i)\}\_{i\in[2,5,8]}$. Then we gather each layer’s dimension-reduced features $\{\tilde f^s\_i\}\_{i\in[2,5,8]}$ and $\{\tilde f^t\_i\}\_{i\in[2,5,8]}$, and upsample them to the same resolution to form the final SD feature $\tilde f^s$ and $\tilde f^t$.
We will include these details in the main paper.
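The per-layer step of this workflow could be sketched as below (a minimal NumPy sketch with hypothetical names; the PCA output dimension and the final bilinear upsampling across layers are omitted details):

```python
import numpy as np

def joint_pca(f_src, f_tgt, dim=4):
    """Joint PCA for one layer: stack source and target features
    (tokens x channels) along the token axis, fit PCA on the union so
    both images share one basis, then split back.  The per-layer
    outputs would subsequently be upsampled to a common resolution and
    concatenated to form the final SD features."""
    both = np.concatenate([f_src, f_tgt], axis=0)
    centered = both - both.mean(axis=0, keepdims=True)
    # principal axes from the SVD of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:dim].T
    n = f_src.shape[0]
    return proj[:n], proj[n:]
```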
------
**Q1**: Performance of other DINO variations.
**A:** The pre-trained EsViT model checkpoint is currently inaccessible due to public access restrictions (issue #27 in the EsViT GitHub repo). Thus, we tried other DINO variations: iBOT [1] and DINOv1. As shown in Table 2, both DINOv1 and iBOT perform significantly worse than DINOv2. Surprisingly, zero-shot fusion with SD slightly decreases the overall performance. We hypothesize that this may be due to the relatively weak performance of DINOv1 and iBOT; if these features are strictly worse than SD features, they may only contribute noise to the zero-shot fusion results. A learned projection would enable fusion to ignore features that decrease the overall performance and at least perform similarly to SD by itself.
|Method|PCK@0.10|PCK@0.05|PCK@0.01|
|-|:-:|:-:|:-:|
|DINOv1-vitb16|33.17|19.93|2.43|
|iBOT-vitb16|38.85|23.90|2.63|
|DINOv2-vitb14|55.15|39.66|6.12|
||
|Stable Diffusion|56.18|42.80|6.79|
|Fuse-DINOv1-vitb16|51.79|37.50|5.34|
|Fuse-iBOT-vitb16|55.11|38.99|4.85|
|Fuse-DINOv2-vitb14|63.28|47.61|8.32|
Table 2. Comparison with DINOv2 and two other variants on SPair-71k.
------
**References**
[1] iBOT: Image BERT Pre-Training with Online Tokenizer, Zhou et al. 2022
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
As we near the midpoint of our discussion period, we would like to confirm whether we have successfully addressed the raised concerns in your review. Should any lingering issues require further attention, please let us know at your earliest convenience. This will enable us to address them promptly.
We appreciate your time and effort in enhancing the quality of our manuscript.
Thank you.
---
Rebuttal 2:
Title: Please comment on the author rebuttal
Comment: Dear reviewer fh35,
Could you please let us know your reaction to this author response.
Thanks,
The Senior AC for this paper.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response, I am satisfied by the answers to my and other reviewers' concerns. I will thus increase my score.
---
Reply to Comment 2.1.1:
Comment: Thank you for your constructive feedback, and we're pleased to hear that our responses have addressed your concerns. However, we haven't noticed the score update as mentioned in your comment. Could you kindly adjust your original review rating so it would be easier for the ACs to make the final recommendation? Thank you! | Summary: This paper proposes to study the effectiveness of features extracted from Stable Diffusion for dense correspondences. The extracted features are compared to that of DINOv2 and shown to be complementary. A very simple fusion scheme is then proposed and evaluation on datasets for sparse and dense correspondences as well as on instance swapping with convincing results.
---
The rebuttal is convincing, I maintain my initial rating leaning towards acceptance.
Strengths: With all the effort that has recently been poured into diffusion models, this empirical study of their use outside of generating nice images is refreshing, especially since it tackles a very low-level problem (keypoint correspondences) that is somewhat remote from image generation and yet surprisingly close, since the model has to generate coherent local structures. The study shows that diffusion model features are indeed very useful for such problems and thus sheds light on what one can expect from the latent space structure of SD.
Weaknesses: - All the results are performed in zero-shot, which is a bit limiting. I am not familiar with the recent literature on correspondence problems, but I assume that a fine-tuning of the features (at the very least of the projection) is possible to see how much we can get from these models.
- All the experiments were made with stable diffusion 1.5. It would be interesting to test another model to see if the same results hold across architecture and training change and thus if it is a generic property of diffusion models.
- The fact that the best results are obtained using 2 really big models is a bit annoying. It means it is difficult to know if the results come from the nature of the methods employed (SSL and diffusion) or just from the sheer capacity of the models employed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The timestep at which features are extracted is not clearly mentioned (is it t=100 at line 214? In that case, over a total time length of how long?). What influence does it have? Are the features more meaningful closer to the denoised image?
- The swapping results are impressive, but they are refined by running the diffusion process afterwards. It would be great to see the compositing before the diffusion process to evaluate the quality of the correspondences.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper discusses two limitations: the low resolution of the features and the computational budget needed to get features from a big diffusion model. The first one is probably the bigger concern for correspondences.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** Effect of fine-tuning the features with the projection layer.
**A:** We briefly explored a supervised adaptation by training a projection layer [1] on top of the extracted features, guided by the CLIP-style symmetric cross entropy loss with respect to corresponding keypoints. As in Table 1, this approach yields marked improvements across both SPair-71k and PF-Pascal datasets. Notably, the fused features consistently outperform individual SD or DINOv2 features, even under the supervised setting.
|Technique|Method||SPair-71k|||PF-Pascal||
|-|-|:-:|:-:|:-:|:-:|:-:|:-:|
|||PCK@0.01|PCK@0.05|PCK@0.1|PCK@0.05|PCK@0.1|PCK@0.15|
|NN-based|DINOv2|6.1|39.7|55.2|61.1|77.3|83.3|
||SD|6.8|42.8|56.2|63.2|77.7|84.3|
||Fuse|8.3|47.6|63.3|72.1|86.0|90.6|
||
|Projection Layer|DINOv2|8.6|56.4|74.1|74.2|90.8|95.4|
||SD|10.0|56.0|71.1|77.4|89.7|93.9|
||Fuse|**10.7**|**62.5**|**78.2**|**80.9**|**93.6**|**96.9**|
||
|Supervised Baselines|PWarpC-NC-Net|-|31.6|52.0|79.2|92.1|95.6|
|(*: fine-tuned backbone)|CATs++*|-|-|*59.8*|*84.9*|*93.8*|*96.8*|
||IFCAT*|-|-|*64.4*|*88.0*|*94.8*|*97.9*|
Table 1. Comparison between NN-based methods, fine-tuning a projection layer, and other supervised baselines.
We will include these findings in the paper, which underscores the potential of our simple fusion strategy that already yields SOTA results in an unsupervised setup.
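The training objective for this projection layer could be sketched as below (a minimal NumPy sketch of a CLIP-style symmetric cross-entropy over matched keypoint features; the temperature value and batching details are assumptions, and the real training would backpropagate through a learnable projection):

```python
import numpy as np

def symmetric_ce(src, tgt, temperature=0.07):
    """CLIP-style loss: row i of `src` and row i of `tgt` are projected
    features of the same keypoint pair; average the cross-entropy over
    both matching directions (source->target and target->source)."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    logits = src @ tgt.T / temperature
    labels = np.arange(len(src))

    def ce(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()  # correct match on the diagonal

    return 0.5 * (ce(logits) + ce(logits.T))
```

The loss is near zero when each source keypoint feature is most similar to its own target, and grows as matches are confused with other keypoints in the batch.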
------
**W2:** Test other SD models with varying architectures and training settings.
**A:** We further tested two alternative variations of the SD model with the same architecture but different training settings, SD-1-3 and SD-2-1-base. We also explored two work-in-progress distilled SD architectures, SD-tiny and SD-small [2] (models released by Segmind Inc.), which have 45% and 65% fewer parameters than the base model.
Table 2 reports each model’s PCK metric on the SPair-71k dataset. All SD base models exhibit similar performances for both individual and fused features.
Despite their slight performance drops, the distilled SD-tiny and SD-small variants yield noticeable improvements when fused with DINOv2. We hope to expand this analysis further to other models in future work, e.g. Pixel diffusion (Imagen [3]), token-based models (Muse [4]), once they are publicly available.
|Method|PCK@0.10|PCK@0.05|PCK@0.01|
|-|:-:|:-:|:-:|
|SD-tiny|41.07|28.67|5.21|
|SD-small|51.05|38.33|6.28|
|SD-1-3|55.30|42.72|**7.72**|
|SD-1-5|**55.90**|**42.76**|7.01|
|SD-2-1-base|54.43|41.68|7.19|
||
|DINOv2-vitb14|55.15|39.66|6.12|
|Fuse-vitb-tiny|56.96|42.35|7.27|
|Fuse-vitb-small|60.36|45.08|7.95|
|Fuse-vitb-1-3|**62.69**|**47.09**|8.25|
|Fuse-vitb-1-5|62.61|46.60|**8.47**|
|Fuse-vitb-2-1-base|62.22|46.10|8.40|
Table 2. Comparison of different variants of SD models on SPair-71k; we use the explicit captioner for a fair comparison.
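For readers unfamiliar with the metric used throughout these tables: PCK@α counts a predicted keypoint as correct if it lies within α times a normalization length (for SPair-71k, commonly the longer side of the object bounding box) of the ground truth. A minimal sketch under that assumption:

```python
import numpy as np

def pck(pred, gt, bbox_size, alpha=0.1):
    """Percentage of Correct Keypoints.

    pred, gt: (N, 2) arrays of 2D keypoint locations.
    bbox_size: normalization length, e.g. max(h, w) of the object bounding box.
    """
    dist = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dist <= alpha * bbox_size))
```

Here `alpha=0.1` corresponds to the PCK@0.10 columns, and smaller values (0.05, 0.01) correspond to stricter thresholds.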
------
**W3:** Ablate on the capacity of the models employed.
**A:** To ablate the model capacity, we explore the use of smaller, distilled versions. In particular:
- We explored DINOv2-vits14, which has about 25% of the network parameters of the base DINOv2 model. As shown in Table 3, this substantially smaller variant still delivers results comparable to the base model. This suggests that while capacity plays a role, the core techniques remain effective even with a significantly smaller model.
|Method|PCK@0.10|PCK@0.05|PCK@0.01|
|-|:-:|:-:|:-:|
|DINOv2-vits14|53.28|37.20|5.86|
|DINOv2-vitb14|55.15|39.66|6.12|
|Fuse-DINOv2-vits14|61.34|46.57|7.84|
|Fuse-DINOv2-vitb14|63.28|47.61|8.32|
Table 3. Comparison of DINOv2-vit small and base model on SPair-71k.
- In Table 2, we explored the smaller SD-tiny and SD-small variants. While these models maintain similar properties to the base model, they perform worse overall. The performance drop can be attributed to the "in progress" nature of these distilled models as well as the decrease in capacity. In general, the fusion results indicate that the DINO and SD features are complementary even when the capacity of the individual networks changes.
------
**Q1:** Clarity and effect of the timesteps.
**A:** Our method extracts features at time step 100 of 1000. As shown in Table 1 of the R1-Pbqb Q3 response, feature extraction at different time steps does not significantly affect the overall accuracy. We found time step 100 to be optimal by searching over the validation set.
ODISE finds that t=0 yields optimal results. This would be the case for semantic segmentation, where a denoised image with clear object boundaries is critical for accuracy. However, for semantic correspondence, where semantic information is also important, feature maps extracted at a slightly earlier denoising step, which carry more structural information, may help more.
------
**Q2:** Ablate on the refinement process for swapping.
**A:** Section A.5 and Table 4 in the Supplementals provide a quantitative comparison of the diffusion-based refinement process. The refinement step improves both the Quality score and the CLIP score but hurts the FID score, possibly due to certain artifacts that amplify the discrepancy with the distribution of real images during the refinement stage.
Section B.2 in Supplementals further provides qualitative examples that the refinement process visually smooths local textures from warping results.
-----
**References**
[1] ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models, Xu et al, 2023
[2] On Architectural Compression of Text-to-Image Diffusion Models, Kim et al. 2023
[3] Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, Saharia et al. 2022
[4] Muse: Text-To-Image Generation via Masked Generative Transformers, Chang et al. 2023 | Rebuttal 1:
Rebuttal: ## Acknowledgements
We thank the reviewers for the comments, extensive feedback, and recognition of the strengths of our work:
- **Comprehensive Analysis:** Our "extensive evaluation and analysis" of Stable Diffusion (SD) features and DINOv2 features provide "valuable insights" into their distinct properties and potential uses, as recognized by the reviewers (R3-fh35, R5-YopU).
- **Exceptional Performance:** The "impressive performance" of our method in the zero-shot setting, as well as its substantial gains over existing supervised methods, were highlighted by the reviewers (R1-Pbqb, R3-fh35, R4-NMVR, R5-YopU).
- **Practical Applications:** Our successful demonstration of instance swapping illustrated the real-world applicability of our method, as noted by the reviewers (R1-Pbqb, R3-fh35, R4-NMVR).
- **Clarity of Presentation:** The reviewers praised the "smooth logic flow" of our paper and the clarity of our explanations, noting that it was well-written and "easy to read and understand" (R1-Pbqb, R4-NMVR, R5-YopU).
- **Inspiration:** The reviewers commended our work as well-inspired, reflecting the successful combination of Stable Diffusion and DINOv2 features for semantic correspondence tasks, providing a refreshing perspective on the applications of diffusion models outside of image generation (R1-Pbqb, R2-fsqe, R3-fh35, R5-YopU).
We first address common questions by the reviewers, and then respond to individual inquiries.
------
## Questions & Responses
**Q:** (R1-Pbqb, R5-YopU) Why do Stable Diffusion and DINOv2 features behave differently?
**A:** Due to resource limitations (e.g., time, data, and computation resources), we focus primarily on the empirical study of existing models. Unfortunately, experimentally determining the causes of feature properties often requires clean ablations with respect to data, training schemes, and architecture, and we do not have the resources to run these ablations. That said, we have several conceptual hypotheses to explain the observed difference, i.e., that SD features carry better spatial information but worse matching accuracy than DINOv2 features:
- **Training Paradigms:** DINO's self-supervised learning training works by taking an image and producing global and local views; the global views are passed to a teacher network and used to distill information into a student's representation of both the global and local views. While this encourages "local-to-global" features [1], it also has the side effect of encouraging the features to be invariant to the training augmentations used to generate the different views (e.g., color jittering, Gaussian blur, solarization, multi-crop, and horizontal flips). **This invariance in DINO decreases the network's ability to differentiate spatial information**, e.g., the left side of an object to its right side, **particularly for symmetrical objects or objects without texture signals** (such as the bus in the bottom of Figure 3).
**On the other hand, the SD model has been trained for text-to-image generation, which requires the model to be aware of both global object structure as well as local shape and texture cues**; for example, since the model has been trained to generate both photos and sketches of dogs, the model should be able to understand where a dog's tail is in relation to its head, even with minimal texture signal.
- **Architecture Differences:** The ViT employed by DINOv2 processes the image as a stack of patches; while the positional encoding preserves a sense of spatial awareness, attention is computed globally and less emphasis may be placed on local structure. In contrast, the convolution layers in SD’s UNet may retain more details and enhance the retention of spatial information.
Despite the lack of clean ablations, there are a few experimental data points that we can consider:
- **Model Capacity:** In this rebuttal (R2-fsqe-W2 and R2-fsqe-W3), we experiment with lower capacity versions of SD and DINOv2. The smaller distilled versions of SD have the same properties as base SD and fusing these smaller models with DINOv2 still result in improved performance on SPair-71k. The same is true of lower capacity versions of DINOv2. This suggests that capacity is not the sole reason for these features.
- **Training schemes:** We investigated models with identical architectures but diverse training protocols (R2-fsqe-W2, different variants of SD). Specifically, the SD-1-5 model is fine-tuned using different steps and datasets in comparison to SD-1-3 (195,000 steps on "laion-aesthetics v2 5+" versus 595,000 steps on "laion-improved-aesthetics"). This is also distinct from the SD-2-1-base model, which is trained on the filtered LAION-5B dataset. Despite these variations, all three models demonstrate comparable performance, indicating that these properties are robust to small variations in training schemes and datasets.
In general, we think that trying to identify specific causes and provide evidence is a very interesting question and a great direction for future work.
[1] Emerging Properties in Self-Supervised Vision Transformers, Caron et al. 2021. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes using features extracted from a stable diffusion model for dense image correspondence tasks. The paper further proposes using DINO features along with stable diffusion features for the task and empirically shows that the combination has a complementary effect--specifically SD features have good spatial localization and are smooth, whereas DINO features are sparse but accurate. Experiments are done on several datasets and the proposed feature extraction technique improves drastically over supervised, unsupervised, and weakly supervised baselines.
Strengths: 1. The paper is well-written, adequately inspired, and well-executed.
2. Experiments depict the importance of these large pre-trained models for the task of correspondence learning in images. Specifically, the large gains over supervised methods are quite surprising.
3. Instance swapping application clearly demonstrates the ability to do accurate correspondences between different instances of the same category of objects.
Weaknesses: 1. The use of pre-trained features from large models is a well-explored area of research [1,2,3]. These pre-trained models are known to perform well on downstream tasks such as semantic segmentation, detection, and classification. It is unsurprising that these features are also useful for correspondence learning.
2. Though the authors show limitations of the features, these should have received more focus in the paper, specifically by categorizing the mistakes made by the approach and possible ways to alleviate them. This would be a good contribution to the community, as it informs readers when to avoid using the proposed approach.
[1] _Unleashing text-to-image diffusion models for visual perception._ Wenliang Zhao, Yongming Rao, Zuyan Liu, Benlin Liu, Jie Zhou, and Jiwen Lu. 2023
[2] _Emerging properties in self-supervised vision transformers._ Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. 2021
[3] Open-vocabulary panoptic segmentation with text-to-image diffusion models. Jiarui Xu, Sifei Liu, Arash Vahdat, Wonmin Byeon, Xiaolong Wang, and Shalini De Mello. 2023
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Though the motivation for using these features is clear, the paper doesn't provide reasons for the complementary properties of the features from DINO and SD. It would be interesting to analyze what specifically gives rise to these complementary properties. Is it because of the training objective, architecture, datasets, etc?
2. Second and fourth rows of the right side of Figure 3 can be explained more for clarity. Specifically, if the figure also includes the original images, it will make more sense.
3. Did the authors experiment with giving different textual inputs to SD? Does including object categories in the textual categories improve the performance of correspondences?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** Leveraging SD / DINO features for correspondence learning.
**A:** While [1,3] demonstrate that SD features are useful for depth and semantic segmentation and [2] that DINO features are useful for image retrieval and semantic segmentation, less work has focused on the tasks of semantic and dense correspondences: [4] explores using DINOv1 for these tasks but we are unaware of any published work which explores SD or DINOv2 features for correspondences.
Unlike semantic segmentation or detection, which are defined on a single image, **the correspondence task requires that different objects have similar representations across different images** with potentially different lighting or camera intrinsics. In addition, dense correspondence quality tends to improve when the features accurately encode lower-level details like textures, unlike segmentation or detection, which benefit from features with more high-level semantics. Also different from [1], we show that SD features are surprisingly useful for correspondences in a zero-shot setting, meaning that these properties are already present in the features and do not require additional fine-tuning or processing.
In addition to demonstrating that Stable Diffusion features are useful for correspondences, **our other main contribution is an in-depth analysis of the strengths and weaknesses of SD versus DINO features for correspondence**, and demonstrating that these features are surprisingly complementary. While these two features perform roughly equally, their fusion can lead to SOTA results on several correspondence benchmarks, even in a zero-shot setting with no training.
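The weighted-concatenation fusion and zero-shot nearest-neighbor matching discussed above reduce to a few lines. Below is an illustrative NumPy sketch; the fusion weight `alpha`, the epsilon term, and the flattened feature shapes are assumptions made for illustration, not the paper's exact implementation:

```python
import numpy as np

def fuse_features(feat_sd, feat_dino, alpha=0.5):
    """Weighted concatenation of L2-normalized SD and DINOv2 feature maps.

    feat_sd, feat_dino: (H*W, C1) and (H*W, C2) flattened feature maps.
    """
    sd = feat_sd / (np.linalg.norm(feat_sd, axis=1, keepdims=True) + 1e-8)
    dino = feat_dino / (np.linalg.norm(feat_dino, axis=1, keepdims=True) + 1e-8)
    return np.concatenate([alpha * sd, (1 - alpha) * dino], axis=1)

def nn_match(feat_src, feat_tgt):
    """For each source location, return the nearest-neighbor target location."""
    sim = feat_src @ feat_tgt.T  # similarity of every source/target pair
    return sim.argmax(axis=1)
```

Because each feature is normalized before concatenation, neither source dominates the similarity purely by scale, which is the property that lets a simple weighted concatenation expose the complementary strengths of the two features.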
------
**W2:** Limitations of the approach and potential solutions.
**A:** Fig. 7 of the Supplementals shows representative failures of the fused feature, especially in matching tiny objects due to the low resolution of the feature maps. We have explored different techniques for improving the resolution of the features (e.g., resizing the input or different projection layers). Our method also sometimes struggles with symmetric objects, such as the airplane image in the Pixel Warping tab of index.html, included in the Appendix. This can be overcome by exploiting geometric priors during the matching stage.
Our paper primarily focuses on the unsupervised, zero-shot setting at test time, which allows us to better understand the information present in the features as is. For optimal performance, we show in the rebuttal that these results can be improved by training a projection layer on top of these features (please refer to Table 1 in R2-fsqe's W1 response). We expect that even better performance can be achieved by fine-tuning a larger post-processing network on top of the fused features or by fine-tuning the existing backbone feature networks (DINOv2 and SD) in a supervised setup.
We will include this discussion in the main paper.
------
**Q1:** Causes of the complementary properties.
**A:** Please refer to the General Response Q&A.
------
**Q2:** Clarity on Figure 3.
**A:** For the right side of Figure 3, the original image is in *the first column of the first and third rows*, and the target image is in the first column of the second and fourth rows. In the current paper, the original image has color coding to indicate the colormap for correspondences. We will also include the original image without the color coding in the updated version.
------
**Q3:** Effect of explicit textual descriptions.
**A:** The table below reports the PCK@0.10 performance of diffusion features and fused features on the SPair-71k 20-sample subset, when implicit and explicit textual inputs (specifically, “a photo of {category}”) are given. Overall, there are only marginal differences. The explicit textual inputs help at earlier steps (200 steps), while the implicit captioner helps on denoised images. We conjecture that this is because the implicit captioner from ODISE [1] was trained with timestep=0.
|Method|Captioner|0|50|100|150|200|
|-|-|:-:|:-:|:-:|:-:|:-:|
|SD|Implicit|**54.93**|**55.67**|***56.18***|55.11|55.11|
||Explicit|53.58|55.63|*55.90*|**55.45**|**55.15**|
||||||||
|Fuse|Implicit|**63.25**|**63.10**|***63.28***|62.46|62.50|
||Explicit|62.20|62.50|62.61|**62.72**|***63.32***|
Table 1. The PCK performance on SPair-71k for both implicit and explicit captioners under different timesteps. Best results between captioners are **bold**; best results among different timesteps are *italicized*.
-----
**References**
[1] Unleashing text-to-image diffusion models for visual perception, Zhao et al. 2023
[2] Emerging properties in self-supervised vision transformers, Caron et al. 2021
[3] Open-vocabulary panoptic segmentation with text-to-image diffusion models, Xu et al. 2023
[4] Deep ViT Features as Dense Visual Descriptors, Amir et al. 2021
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I am satisfied by the rebuttal. I have increased the score to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for the comments! We will further refine our paper accordingly. | null | null | null | null | null | null |
A Single 2D Pose with Context is Worth Hundreds for 3D Human Pose Estimation | Accept (poster) | Summary: This paper proposes a context-aware method that combines 2D image features and detected 2D keypoints for 3D human pose estimation. The joint-centric spatial context represented by the intermediate image features is able to reduce the ambiguity in 3D lifting. Under this motivation, a novel framework is proposed that consists of a deformable context extraction module, a pose-context feature fusion module, and a spatial inter-joint modeling module. Experiments validate the effectiveness of the proposed modules in reducing calculation time and achieving better performance. On two standard benchmarks, the proposed method shows its effectiveness and stability without using any temporal information, compared to other single-frame methods and multi-frame methods.
Strengths: 1. This paper is well presented and organized. The idea is clear and the method is somewhat novel.
2. New SOTA results are achieved with a significant improvement, even compared to most multi-frame methods.
3. Experiments are well organized and the ablation study is insightful to support the claim of contributions.
Weaknesses: 1. Leveraging 2D intermediate features is not a new idea. Some previous works have also tried this route, such as [a][b][c]. Compared to other skeleton-based methods, either single-frame-based or multi-frame-based, utilizing 2D image features will lead to more memory consumption. This limitation needs to be considered.
[a] Yin, Binyi, et al. "Context-Aware Network for 3D Human Pose Estimation from Monocular RGB Image." 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019.
[b] Habibie, Ikhsanul, et al. "In the wild human pose estimation using explicit 2d features and intermediate 3d representations." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.
[c] Sun, Xiao, et al. "Integral human pose regression." Proceedings of the European conference on computer vision (ECCV). 2018.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. According to the study in Sec. 3 of the supplementary material, temporal information brings less improvement in the proposed method. Why is the result when using 81 frames, as in PoseFormer, not compared? Could the authors give more explanation about this?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have discussed the limitation of motion jitter without temporal information in the supplementary material, but the limitation of memory consumption should also be discussed, as mentioned in the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Author Response to Reviewer dcLQ
## 1. Clarifications on Novelty
We thank the reviewer for sharing related works, and we will discuss them in the `Related Work` section of the final version to make our work more complete. To address the reviewer's concern regarding novelty, we refer the reviewer to our `General Response`, where we comprehensively reaffirm the **novelty** and **contributions** of our approach. In addition, we would like to offer more clarifications on the difference between our method and the works [1,2,3] mentioned by the reviewer.
While these works [1,2,3] also use image features, our work is *clearly different* for the reasons below:
**1) Different Pipeline:** Our method follows the *two-stage* pipeline (2D pose detection then lifting, the dominant paradigm in recent years); Those works [1,2,3] estimate 3D human pose *end-to-end* from images *without explicitly introducing 2D human poses*. While using image features is **natural** even **stereotypical** for direct estimation (end-to-end) approaches, it is **non-trivial** to retrieve feature maps for lifting-based methods.
**2) Different Feature Sources and Different Ways to Use Image Features:** In our work, we **reuse** feature maps from 2D pose detectors, which are **inherently** part of a lifting-based 3D HPE pipeline. Plus, we leverage feature maps **sparsely**, i.e., we extract and aggregate **point-wise feature vectors** based on the estimated 2D joints; Those works [1,2,3] introduce **independent** networks to extract image features and then use feature maps **as a whole** instead of in a sparse manner.
**3) Different Research Focus:** We propose to leverage image features to remove the dependency of existing (two-stage) lifting-based methods on temporal information; Yin et al. [1] use image features to learn more accurate depth information; Habibie et al. [2] disentangle features for 2D pose from global image features to exploit large scale 2D pose datasets; Sun et al. [3] unify heatmap-based and regression-based methods, and learn 3D heatmaps from image features.
[1] Context-Aware Network for 3D Human Pose Estimation from Monocular RGB Image. IJCNN 2019.
[2] In the Wild Human Pose Estimation Using Explicit 2D Features and Intermediate 3D Representations. CVPR 2019.
[3] Integral Human Pose Regression. ECCV 2018.
## 2. Memory Issue
| Method | Venue | Frame Number | MPJPE$\downarrow$ (mm) | GPU Memory (MB) |
|:----------------:|:-------:|:------------:|:----------------------:|:---------------:|
| MHFormer | CVPR'22 |351|43.0|1651|
| MixSTE | CVPR'22 |243|40.9|17305|
| CA-PF-CPN (ours) | |1|41.6|721|
We agree our method would consume more GPU memory than other lifting-based methods *when the frame number is kept the same*. However, we would like to mention that our approach does *not* cost more GPU memory compared to state-of-the-art multi-frame approaches that use heavy temporal modeling.
We test the memory consumption of different methods during *stable training* using the same PyTorch code base on a single RTX 3090 GPU (24GB) with a batch size of 8. The results in the table above are obtained via `torch.cuda.max_memory_allocated()`, which PyTorch provides to return the peak allocated memory since the beginning of the program. The results show that our method costs less GPU memory than MHFormer and MixSTE by 2.3$\times$ and 24.0$\times$ respectively while achieving comparable performance.
We thank the reviewer for mentioning this issue and will incorporate the results and analysis in our `Limitation` section.
## 3. Temporal Modeling
| Method | Frame Number | MPJPE$\downarrow$ | MPJVE$\downarrow$ |
|:--:|:--:|:--:|:--:|
| CA-PF-S |1|44.7|8.5|
| / |3|44.2|4.8|
| / |9|43.4|3.4|
| / |27|40.2|2.1|
**Note:** CA-PF-S is a *small* model variant. The last row (27 frames) is our new result.
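For reference, the two metrics in this table can be computed as follows: MPJPE averages the per-joint Euclidean error, and MPJVE (the velocity error) applies the same average to first-order temporal differences. A minimal NumPy sketch, assuming already root-aligned pose sequences of shape (T, J, 3):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error; pred, gt: (T, J, 3) pose sequences."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def mpjve(pred, gt):
    """Mean per-joint velocity error: MPJPE over frame-to-frame differences."""
    return mpjpe(np.diff(pred, axis=0), np.diff(gt, axis=0))
```

Note that a prediction with a constant spatial offset but perfectly smooth motion scores poorly on MPJPE yet perfectly on MPJVE, which is why the two metrics together capture both accuracy and temporal consistency.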
### 3.1 Less Improvement?
We additionally experiment with 27 frames, and the result is presented above (with previous results). We agree our temporal model gains less accuracy when using small frame numbers (e.g., 3 frames). However, more video frames bring significant improvements, e.g., our 27-frame model improves MPJPE and MPJVE by **7.8%** and **38.2%** respectively over the 9-frame variant. We think the reason is that large human motions in long videos provide meaningful spatial variety, and such temporal cues help reduce ambiguities in 2D-to-3D lifting. On the contrary, short video clips only include small motions, while our joint-context features already encode such spatial changes (around joints). Therefore, improvements are minor when using small frame numbers.
### 3.2 Clarifications on Not Using 81 Frames
In `Sec. 3 of the supplementary`, we extend our model to multi-frame settings **primarily aiming at solving the temporal consistency** issue (we discussed in `Supp. Sec. 2`), **not to pursue higher accuracy with more frames**. Important results from our experiments are listed below:
1. Even **short-term** temporal information helps reduce jitters well (only 3 video frames bring a 43.5% reduction in velocity error);
2. Our video-based model demonstrates better temporal consistency (and accuracy) than PoseFormer, e.g., our 9-frame model achieves less velocity error (MPJVE) than 9-frame PoseFormer (3.4 v.s. 4.8mm) and comparable results with 81-frame PoseFormer (3.4 v.s. 3.1mm);
3. Current experiments (up to 27 frames) demonstrate **consistent** improvements in both **accuracy** and **temporal smoothness**. We expect our model to further gain improvements using more input video frames.
Given these results, the issue regarding temporal consistency should be well resolved by **short-term** temporal modeling, we think it **unnecessary** to further scale up the sequence length, which indeed **runs against our research motivation** (i.e., to reduce the reliance on temporal information).
We are open to discussions if the reviewer has further concerns or questions!
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The memory issue I mentioned should be compared against single-frame-based methods if this method is presented as a single-frame pose estimation baseline. Although some points are overclaimed in the general response, I will keep my rating based on the strengths listed in my original review.
---
Reply to Comment 1.1.1:
Comment: We express our gratitude to the reviewer for the feedback! We will discuss the memory issue in our `Limitation` section following the suggestions from the reviewer. Additionally, we are committed to refining our paper to avoid potential overstatements. | Summary: This paper proposes Context-Aware PoseFormer, the key idea is to extract multi-scale spatial features for 3D pose lifting from 2D pose.
The authors claim that a single frame beats hundreds of frames and demonstrate that their single-frame method achieves comparable or better results than multi-frame methods such as PoseFormer.
Strengths: The motivation of this article is clear, and the writing is simple and easy to understand.
The experiments in this paper verify the effectiveness of the proposed method, showing that spatial context information is very important, and demonstrate the effect of the proposed Context-Aware PoseFormer.
The ablation study in this article is detailed and verifies the effect of each module in the method.
Weaknesses: I am skeptical of the claim in this paper that a single frame is better than multi-frame methods. This article only verifies that its single-frame method is better than some multi-frame methods such as PoseFormer, but there are 3 problems below:
1) The comparison of the methods in Table 1 is unfair. For example, other methods use the 2D pose result of CPN as the input to the lifting network, while the method in this paper uses HRNet. It is well known that HRNet can perform better 2D detection than CPN.
2) In addition, the method in this paper also takes in more visual features from images, so this paper should also be compared with other methods that use visual features, such as IVT.
3) Even if the single-frame method in this paper works well under the same 2D network, this does not verify that it is definitely better than multi-frame methods. In the method of this article, is it possible to achieve better results by adding temporal information? This is the consensus in the field, and this paper should also verify it.
In addition, the core contribution of the method in this paper is the multi-scale visual features of the image. The authors use a deformable method to extract multi-scale features to improve the performance of the network.
In the field of 3D pose estimation, it is a consensus that extracting better spatial context features can improve performance.
In terms of context information extraction, this article makes no additional contribution, only using existing methods.
Therefore, in general, the method in this paper is somewhat incremental in terms of innovation, and the contribution is not enough to be accepted by NeurIPS.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: see weakness
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Author Response to Reviewer jUmb
## 1. Clarifications on Our Claim
In this paper, we enable a single-frame approach to outperform multi-frame ones (e.g., those using 351 frames) for the first time. We believe the strong experimental results (`Tab. 1` and `Tab. 2 in the main paper`) well verify the effectiveness of our method, and the reviewers (**THhA**,**ez57**,**4XvM**,**dcLQ**) also appreciate our results. We also include more recent papers for comparison. Please check the `Performance Comparison` section below!
As new works are emerging in this field, we agree that we can *never* beat all multi-frame methods at any time, and therefore **we have *not* reached such a highly definitive conclusion**. However, we do **open up the *possibility* of significantly improving the performance of 3D HPE *in a highly disadvantaged position*** (i.e., with *no* access to temporal information), and the results we achieve are already encouraging and attractive (**4XvM**). We are open to suggestions from the reviewer to help us remove such confusion in the final version.
## 2. Performance Comparison
### 2.1 Fairness:
We believe **comparing our CA-PF-CPN model variant with other state-of-the-art methods is fair**, as CPN is consistently used as the 2D pose detector (in `Tab. 1` we use one column to show the 2D pose detector used by different methods). In such a fair setting, our method (41.6mm, lower is better) outperforms a series of works, e.g., 81-frame PoseFormer [1] (44.3mm), 351-frame MHFormer [2] (43.0mm), and 243-frame P-STMO [3] (43.0mm), and achieves comparable results with 243-frame MixSTE [4] (40.9mm). We experiment with other backbones (e.g., HRNet) to primarily show the flexibility of our method to incorporate different 2D pose detectors. We will add more clarifications about the fairness of our experiments in `Sec. 4 of the main paper`.
### 2.2 Completeness:
We had tried our best to gather both single-frame and multi-frame methods for comparison before CVPR 2023 papers were released (`Tab. 1 in the main paper`). Following the advice from reviewer **ez57**, we compare with more recent SOTAs, including several CVPR 2023 and ICCV 2023 papers, and we will incorporate them in our final version. Please check our response to reviewer **ez57**!
[1] 3D Human Pose Estimation with Spatial and Temporal Transformers. ICCV 2021.
[2] MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation. CVPR 2022.
[3] P-STMO: Pre-Trained Spatial Temporal Many-to-One Model for 3D Human Pose Estimation. ECCV 2022.
[4] MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video. CVPR 2022.
## 3. Comparison with IVT [5]
We thank the reviewer for sharing such great work, and *we will include this paper in our `Related Work` section*. Our approach **principally differs** from IVT in the following aspects, although both methods use image features:
**1) Motivation:** Our work aims at removing heavy temporal dependencies of existing lifting-based 3D HPE methods; IVT targets formulating video-based 3D HPE as an end-to-end learnable framework.
**2) Task Pipeline:** We follow the dominant two-stage pipeline, which **explicitly uses 2D human poses as an intermediate representation**. We do **not** use any temporal information; IVT **directly** estimates 3D human poses from **videos** (2D human poses are **not** involved as an intermediate output, and **they use temporal information**).
**3) Implementation Details:** We use input images of size 256 $\times$ 192, while IVT uses much larger images of size 512 $\times$ 512.
Since IVT is **not open-sourced**, we are unable to conduct a direct comparison. However, more analysis and discussions regarding this work will be placed in the final version to make our work more complete!
[5] IVT: An End-to-End Instance-guided Video Transformer for 3D Pose Estimation. ACM MM 2022.
## 4. Multi-frame Extension
| Method | Frame Number | MPJPE$\downarrow$ | MPJVE$\downarrow$ |
|:-------:|:------------:|:-----------------:|:-----------------:|
| CA-PF-S | 1 | 44.7 | 8.5 |
| / | 3 | 44.2 | 4.8 |
| / | 9 | 43.4 | 3.4 |
| / | 27 | 40.2 | 2.1 |
**Note:** CA-PF-S is a *smaller* model variant (than our full model) since temporal processing requires more computation and memory. The last row (27 frames) is our new result.
This is an insightful question! We have already verified the flexibility of our method to extend to multi-frame settings in `Sec. 3 of the supplementary`! We move some results here with a new 27-frame result. The table above shows that our approach **gains consistently** in terms of **performance** (MPJPE, position error) and **temporal smoothness** (MPJVE, velocity error) using more video frames. Please check the supplementary for more details!
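For readers less familiar with the two metrics in the table, MPJPE and MPJVE can be sketched in a few lines (a minimal NumPy version of the standard definitions; variable names are ours):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: mean Euclidean distance
    between predicted and ground-truth 3D joints (typically in mm)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def mpjve(pred, gt):
    """Mean Per-Joint Velocity Error: MPJPE computed on the
    frame-to-frame differences (velocities) of a pose sequence."""
    return mpjpe(np.diff(pred, axis=0), np.diff(gt, axis=0))

# pred/gt: (T frames, J joints, 3 coordinates)
pred = np.zeros((4, 17, 3))
gt = np.zeros((4, 17, 3))
gt[..., 0] = 1.0  # a constant 1mm offset along x
print(mpjpe(pred, gt))  # 1.0: every joint is off by 1mm
print(mpjve(pred, gt))  # 0.0: the offset is constant over time, so velocities match
```

This also illustrates why the two metrics are complementary: a constant positional bias hurts MPJPE but not MPJVE, while jitter over time hurts MPJVE.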
## 5. Concerns about Contribution
We disagree that "the core contribution of the method in this paper is the multi-scale visual features of the image." Leveraging visual representations from 2D pose detectors is itself a **non-trivial** and **novel idea** in the context of lifting-based 3D HPE, while our **novel framework** (**dcLQ**) to extract multi-scale joint-context features and fuse context features with joint embeddings is **only one of our contributions**. Plus, we are the first to make a single-frame method outperform state-of-the-art multi-frame methods. We present our first attempt to tackle the heavy temporal reliance of existing lifting-based methods and establish strong single-frame baselines for future research.
We hope the reviewer could kindly check our `General Response` to all reviewers, where we comprehensively elucidate the **novelty** and **contributions** of our method. We are open to discussions if the reviewer has further questions or concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks for authors' feedback.
First, the additional experiments solve the concerns on fair comparison and temporal information.
From the experiments, temporal information is also useful. Thus, I don't think the title of this paper is suitable, although their method outperforms other temporal methods.
Second, the most important thing, as authors claimed that ''1. We Identify and Tackle A New Research Problem', ...'.
They claimed that they want to reduce the temporal information, but the additional experiments verified that temporal information could improve performance. Besides, if we do not use temporal information, the detected 3D poses are not smooth in temporal dimension, it's no practical in the application.
So I don't agree with the authors' claim about this. I can agree that it's a good baseline compared with single-image methods. But I strongly disagree with the authors' thesis on temporal motivation and contribution, which should be in the single-frame approach track.
Third, authors claimed that ''2. We Provide Fresh Insights regarding the Problem Cause ....''.
I also do not agree with this claim. Actually, there are many methods that use visual features to improve the performance of 3D pose lifting.
It's an overclaim. I will provide some works later.
I am looking forward to your response.
---
Reply to Comment 1.1.1:
Comment: ### Response to Reviewer jUmb [1/2]
We thank the reviewer for the detailed feedback and are glad to address the concerns further!
> First, the additional experiments solve the concerns on fair comparison and temporal information.
We appreciate the reviewer for acknowledging that our performance comparison is fair and that our experiments on temporal modeling are solid.
> From the experiments, temporal information is also useful. Thus, I don't think the title of this paper is suitable, although their method outperforms other temporal methods.
>
> Second, the most important thing, as authors claimed that ''1. We Identify and Tackle A New Research Problem', ...'. They claimed that they want to reduce the temporal information, but the additional experiments verified that temporal information could improve performance.
We also appreciate the reviewer for acknowledging the superiority of our single-frame method over other multi-frame methods.
We wish to emphasize that the assertion **"temporal information still helps improve performance"** does *not* contradict the fact that **"our approach achieves strong performance without using temporal modeling."** It is important to clarify that we have *never* negated the significance of temporal information. In fact, the enhancements derived from temporal information underscore the **versatility** and **scalability** of our method.
Our title endeavors to emphasize a new **alternative** to the conventional approach of incorporating an increased number of video frames, consequently enhancing the accuracy of 3D human pose estimation. While we acknowledge that our title might sound overly assertive, we are receptive to refining it. What is the reviewer's perspective on the title **"Single 2D Pose with Context is Worth Hundreds of Frames for 3D Human Pose Estimation"**? We welcome the reviewer's insights and recommendations.
> Besides, if we do not use temporal information, the detected 3D poses are not smooth in temporal dimension, it's no practical in the application. So I don't agree with the authors' claim about this.
We agree that temporal smoothness is a limitation of our method. **It's important to note that temporal smoothness is a general challenge faced by all single-frame methods in contrast to multi-frame methods, as non-temporal approaches inherently struggle to ensure such smoothness.** Thus, it's reasonable to assert that our single-frame method should not be critiqued on this particular dimension.
Nevertheless, it's worth highlighting that in `Figure 5, Section 4.4`, our approach demonstrates improved temporal smoothness when compared against the context-agnostic baseline, despite not having direct access to temporal information.
In `Tab. 1 of the supplementary`, we also provide a solution to address this limitation by extending our method to model **short-term** temporal dependencies. In terms of the MPJVE metric (where lower values indicate better smoothness), our 9-frame temporal model achieves comparable results with 81-frame PoseFormer (3.4 vs. 3.1mm); our 27-frame model even significantly outperforms 81-frame PoseFormer (2.1 vs. 3.1mm). These experimental findings underscore that **our approach reduces the necessity for extensive long-term temporal modeling to achieve satisfactory temporal smoothness, distinguishing it from conventional multi-frame methods**.
Consequently, the introduction of short-term temporal extensions to our model (e.g., utilizing 9 video frames) holds real-world applicability in mitigating smoothness concerns. | Summary: The paper targets the challenging 3D pose estimation problem based on a new context-aware lifting algorithm. The proposed approach is simple to implement and reproduce. Attractive experimental results have been reported on the challenging benchmarks. Also, the detailed ablations well validate the design of the proposed algorithm.
Strengths: 1. The idea of lifting 2D image features for 3D human pose estimation is interesting.
2. The proposed algorithm reports state-of-the-art performance on 3D pose estimation benchmarks.
3. It also provides sufficient ablation studies to validate the algorithm design of the proposed approach.
Weaknesses: 1. In stage 2, it involves a Deformable Context Extraction module to extract the context features (F1, F2, F3, P). How do these features differ from H1-H3? Also, is it possible to add H1-H3 with positional embedding to the pose-context feature fusion module?
2. It would be reasonable that 2D pose information can be helpful for 3D pose estimation. But lifting from 2D to 3D is non-trivial. How could the proposed approach guarantee consistency with the 3D location of the pose?
3. If there are more temporal frames available, is it possible to further boost the performance of the proposed approach?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address the questions in the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not discuss the potential limitations of the proposed algorithm. I would suggest the authors to have a discussion of the limitations (like the cases with high self-occlusion which temporal information may be helpful).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Author Response to Reviewer 4XvM
## 1. Details about Deformable Context Extraction
$H_1$-$H_3$ refer to the *raw feature maps* produced by 2D pose detectors, while $F_1$-$F_3$ are *extracted feature vectors* named "Context Features," obtained from the corresponding feature maps using *Deformable Context Extraction* (DCE). Specifically, we initialize $F_1$-$F_3$ as the feature vectors directly sampled at the detected 2D joints from $H_1$-$H_3$. As the detected joints inevitably introduce noise, e.g., due to occlusions, we use deformable attention [1] to produce a small set of sampling points around each detected joint and fuse their features. We provide a visualization of sampled points in `Fig. 4 of the main paper`. This sampling-and-fusion process is repeated $N_1$ (the number of DCE layers) times, and the final DCE output, which represents the context features for each joint from each feature map, is denoted by $F_1$-$F_3$. $P$ is the linear embedding of 2D joint coordinates. Please check `Fig. 3` and `Sec. 3 (L173-200) in the main paper` and `Sec. 5 in the supplementary` for more details and visualization!
[1] Deformable DETR: Deformable Transformers for End-to-End Object Detection. ICLR 2021.
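As an illustration only (not the authors' implementation, which uses bilinear sampling and attention-weighted fusion inside deformable attention), the per-joint sampling-and-fusion step of DCE could be sketched as:

```python
import numpy as np

def sample_joint_context(feature_map, joints, offsets):
    """Fuse features sampled around each detected 2D joint.
    feature_map: (C, H, W); joints: (J, 2) as (x, y) pixel coords;
    offsets: (J, K, 2) sampling offsets per joint (learned in DCE).
    Nearest-neighbor sampling and uniform averaging stand in for the
    bilinear sampling and attention weights of deformable attention."""
    C, H, W = feature_map.shape
    J, K, _ = offsets.shape
    context = np.zeros((J, C))
    for j in range(J):
        for k in range(K):
            # Sample at the detected joint plus a learned offset, clipped to the map.
            x = int(round(np.clip(joints[j, 0] + offsets[j, k, 0], 0, W - 1)))
            y = int(round(np.clip(joints[j, 1] + offsets[j, k, 1], 0, H - 1)))
            context[j] += feature_map[:, y, x]
    return context / K  # (J, C): one joint-context feature vector per joint
```

With zero offsets this reduces to directly indexing the feature map at the detected joints, i.e., the initialization of $F_1$-$F_3$ described above.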
## 2. Add H1-H3 to the Pose-Context Feature Fusion module
*Pose-Context Feature Fusion* primarily aims at fusing image cues (joint-context features, $F_1$-$F_3$) and position cues (joint-coordinate embedding, $P$) for each joint using transformers. Adding $H_1$-$H_3$ to *Pose-Context Feature Fusion* is possible and may provide more improvements as they intuitively encode rich context features. However, some concerns should be addressed:
1. Processing feature maps ($H_1$-$H_3$) generally costs more GPU memory and computation compared to vectorized features ($F_1$-$F_3$) since $H_1$-$H_3$ also contain features for uninformative background;
2. While it is convenient for transformers to process feature vectors ($P$, $F_1$-$F_3$) with the same dimension, it is not straightforward to process feature vectors ($P$) and feature maps ($H_1$-$H_3$) simultaneously as they have different shapes. We may need extra model design to make such a process possible.
Given the concerns above, we prefer to use vectorized features ($F_1$-$F_3$) rather than feature maps ($H_1$-$H_3$). Since we extract $F_1$-$F_3$ from $H_1$-$H_3$ based on detected 2D joints, they are reasonably informative to provide desirable joint-context features. We thank the reviewer for such an enlightening question, and we will incorporate the analysis and explanations about module design in our `Method` section in the final version.
## 3. How to ensure 2D-to-3D consistency?
This is **not** a weakness of our method! We follow the *common practice* in the domain: the format (e.g., joint number and type) of 2D joints and 3D joints (input-output pairs) are pre-defined and aligned before training. Then our network learns to lift 2D joints to 3D via supervised learning using paired 2D and 3D data.
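A shape-level sketch of this common lifting setup (J = 17 joints as on Human3.6M; the linear map here is a hypothetical stand-in for the actual lifting network):

```python
import numpy as np

J = 17                                      # joint number, shared by input and output
pose_2d = np.random.randn(J, 2)             # detected 2D joints (x, y), the lifter's input
W = np.random.randn(J * 3, J * 2) * 0.01    # hypothetical linear lifter (stand-in network)
pose_3d = (W @ pose_2d.reshape(-1)).reshape(J, 3)  # lifted 3D joint coordinates
# Training minimizes a supervised loss (e.g., MSE) against paired ground-truth
# 3D joints that share the same pre-defined joint format as the 2D input.
```

The point is the aligned joint format: the j-th 2D input joint and the j-th 3D output joint denote the same body part, so no extra mechanism is needed to enforce 2D-to-3D correspondence.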
## 4. Multi-frame Extension
| Method | Frame Number | MPJPE$\downarrow$ | MPJVE$\downarrow$ |
|:-------:|:------------:|:-----------------:|:-----------------:|
| CA-PF-S | 1 | 44.7 | 8.5 |
| / | 3 | 44.2 | 4.8 |
| / | 9 | 43.4 | 3.4 |
| / | 27 | 40.2 | 2.1 |
**Note:** CA-PF-S is a small model variant since temporal processing requires more computation and memory. The last row (27 frames) is our new result.
In `Sec.3 of the supplementary`, we have described a simple way to extend our single-frame approach to model temporal dependencies. Here we move some results from `Supp. Tab. 1` with a **new** result using 27 frames, showing that by using more video frames, our method *gains consistently* in terms of *performance* (position error, MPJPE) and *temporal smoothness* (velocity error, MPJVE). Given the results above, we expect our model to improve further if we increase the frame number.
## 5. Discussion on Limitation
Indeed, we **have included** the discussion on the limitation of our method in `Supp. Sec. 2`. Please check! Besides, we agree that temporal information helps to resolve self-occlusion by providing features from unoccluded video frames. Interestingly, we find that *contextual features may also improve the robustness of the model to self-occlusion*. In `Fig. 5 of the supplementary`, we provide two video clips where strong self-occlusion (due to pose or clothing) makes the 2D human pose unreliable. Despite inaccurate 2D input, our method gives robust results since we also leverage spatial contextual clues from images to localize joints in 3D in addition to noisy 2D joints.
We think "how our model deals with self-occlusion and how temporal information helps model" would provide exciting insights, and we will include more analysis in the final version. Plus, we will move the `Limitation` section to the main paper. We thank the reviewer for such an insightful suggestion.
---
Rebuttal Comment 1.1:
Comment: The rebuttal well addressed most of my concerns.
---
Reply to Comment 1.1.1:
Comment: We appreciate your feedback! If you have any more concerns, we're more than happy to assist in addressing them. | Summary: This paper leverages the readily available intermediate visual representations for 3D human pose estimation. The method discards temporal information to address the time-intensive issue of existing lifting-based methods. The authors design a simple pipeline, named Context-Aware PoseFormer, to extract informative context features and fuse these features to learn more positional clues.
After reading the authors' rebuttal, my previous concerns were all addressed.
Strengths: 1. The paper is generally well written. The problem to address and the shortcomings of the existing approaches are discussed well.
2. The motivation and observation are clear and meaningful, and the network is well designed to solve these problems.
3. The experiments demonstrate well the superiority of the proposed method over the prior art. The authors also provide adequate ablation studies.
Weaknesses: More SOTA methods from 2023 should be chosen for comparison.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Does the resolution of input images influence the performance?
2. It would be interesting to apply the proposed method to other related tasks like hand pose estimation to verify its generalization capability.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors should describe the limitation of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Author Response to Reviewer ez57
## 1. More Comparisons with SOTA in 2023
We thank the reviewer for the advice on performance comparison! We agree that comparing with more papers would make our work more complete. We include recently released CVPR'23 and ICCV'23 papers in the table below and will incorporate the results in our final version.
| Method | Venue | 2D Pose Detector | Frame Number | MPJPE on Human3.6M$\downarrow$ (mm) |
|:---------------------:|:--------:|:----------------:|:------------:|:-----------------------------------:|
| MPM [1] | arXiv'23 | CPN | 243 | 42.3 |
| STCFormer [2] | CVPR'23 | CPN | 243 | 41.0 |
| GLA-GCN [3] | ICCV'23 | CPN | 243 | 44.4 |
| CA-PF-CPN (ours) | | CPN | 1 | 41.6 |
[1] MPM: A Unified 2D-3D Human Pose Representation via Masked Pose Modeling. arXiv 2023.
[2] 3D Human Pose Estimation with Spatio-Temporal Criss-cross Attention. CVPR 2023.
[3] GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video. ICCV 2023.
All approaches listed above (except ours) use **243** video frames, while we only use a **single** frame. Compared to the latest SOTAs in 2023, our approach is still highly competitive: our CA-PF-CPN outperforms MPM [1] (arXiv'23) and GLA-GCN [3] (ICCV'23) by 1.7% and 6.3%, respectively, and achieves comparable performance with STCFormer [2] (CVPR'23). Our performance can be further improved using larger backbones (`Tab. 1, main paper`) or temporal information (`Sec. 3, supplementary`).
## 2. The Impact of Image Resolution
Thanks for such an insightful question! We think it is exciting and meaningful to explore the impact of image resolution on our method. Our paper uses images of size 256 $\times$ 192, a standard setting in COCO 2D HPE. As 384 $\times$ 288 is also widely used for input image size in 2D HPE, we experiment on our CA-PF-HRNet-32 with such resolution, and the results are shown below.
| Method | Image Size | GFLOPs of Backbone | MPJPE$\downarrow$ (mm) |
|:---------------------:|:----------------:|:------------------:|:----------------------:|
| CA-PF-HRNet-32 (ours) | 256 $\times$ 192 | 7.1 | 41.4 |
| / | 384 $\times$ 288 | 16.0 | 39.3 |
Increasing the image size from 256 $\times$ 192 to 384 $\times$ 288 brings a 5.1% error reduction. While the model architecture and hidden dimensions of the backbone network are unchanged, it produces larger feature maps. Larger feature maps encode more fine-grained joint-context features and therefore improve performance. This result is consistent with our finding in `Tab. 4 of the main paper` that high-resolution features contribute more than low-resolution ones. We will incorporate the result and analysis in our `Ablation Study (Sec. 4.3, main paper)`.
## 3. Generalization Ability to Related Fields
We apologize for being unable to show the results at present, as our computational resources are limited. Exploring the generalization ability of our approach to other domains will definitely be our future direction! For example, we expect our method to work on human mesh reconstruction, hand pose estimation, and, more generally, other 3D tasks where the 2D skeleton representation of the target 3D object is readily available. We believe leveraging the well-learned 2D features that produce such skeletons will reduce ambiguities in 2D-to-3D lifting and significantly improve the performance of the 3D task. We will open a `Future Work` section in the final version to discuss potential directions.
## 4. Discussion on Limitation
In the `supplementary`, we have included the discussion on the limitation of our method in `Sec. 2` and our corresponding solution in `Sec. 3`. Please check!
---
Rebuttal Comment 1.1:
Comment: I have no further questions. I recommend to accept this paper.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer again for the considerate feedback and valuable insights! We will integrate the experimental results and analysis to enhance the comprehensiveness of our paper. | Rebuttal 1:
Rebuttal: # General Response
We thank the reviewers for their careful reading and considerate feedback, and we are thrilled to receive ratings of 4, 5, 5, 7, and 7!
We are glad that reviewers unanimously agree that *Context-Aware PoseFormer* is a **simple but effective approach** demonstrated by **strong experimental results** ("considerable benefits" (THhA), "superiority over the prior art" (ez57), "attractive experimental results" (4XvM), "experiments verify the effect of the proposed method" (jUmb), "new SOTA results" (dcLQ)). We’re further glad that reviewers agree that our **idea is interesting** ("interesting insight" (THhA), "idea is interesting" (4XvM)) and that our **motivation is well illustrated** with **clear writing** ("motivation and observation are clear and meaningful" (ez57), "motivation is clear" and "easy to understand" (jUmb), "paper is well presented and organized," and "idea is clear" (dcLQ)). The reviewers also agree that our ablations are "adequate" (ez57), "detailed" (jUmb), "insightful" (dcLQ), and "well validate the design of the proposed algorithm" (4XvM).
However, some reviewers raised their concerns regarding the novelty of our approach. To address those concerns, we first reaffirm the **novelty** and **contribution** of the proposed method, then answer specific questions for each reviewer in the corresponding rebuttal space, and we will incorporate all feedback in the final version.
## 1. We Identify and Tackle A **New Research Problem**
The dominant paradigm in 3D human pose estimation (HPE) literature is lifting a 2D joint sequence to 3D (dubbed *lifting-based* methods, in contrast to direct estimation from images). Heavily using temporal information (with up to 351 video frames) has been a *default* setting in the field and has been proven to boost performance. However, we point out in this paper that such reliance on temporal information brings several problems: high computation, performance saturation, and the non-causal issue. Despite these problems, **reducing the heavy reliance on temporal modeling in lifting-based 3D HPE has not yet been explored. We, for the first time, enable a single-frame method to outperform state-of-the-art multi-frame methods that use up to hundreds of video frames.**
### Broader Impacts on the Community
Learning more powerful spatial-temporal representations with ever-increasing video frames (up to 351) has long been the research focus in lifting-based 3D HPE. However, **our approach breaks through such common practice (i.e., using no temporal information) while achieving encouragingly strong performance**. We establish solid baselines for the community and hope our work inspires more research to **think out of the box**: large-scale spatial-temporal modeling is not the only way to improve 3D HPE. We also believe that a research community should embrace different approaches, and our method can be such a starting point.
## 2. We Provide **Fresh Insights** regarding the Problem Cause
We discover the fundamental cause of existing methods' heavy temporal reliance by revisiting the dominant two-stage pipeline in the literature (L42-56). We point out that the 2D skeleton alone is insufficient to deal with ambiguities in 2D-to-3D lifting, making previous works resort to long-term temporal clues to mitigate ambiguities. To address this problem, we propose to retrieve the *discarded* visual representations (intermediate feature maps) learned in the 2D pose detection stage, as such representations encode visual cues from images that potentially help to reduce ambiguities. These insights are **fresh** in the domain and appreciated by reviewer **THhA**.
## 3. We Design a **Novel Framework** to Solve the Problem
"How to leverage the visual representations learned by 2D pose detectors" is a **non-trivial** question. Naively incorporating the global image features into the lifting process may introduce unnecessary memory and computational costs on the background. We comprehensively consider the task pipeline and design a novel approach to extract most task-relevant information, named *joint-context features* from the feature maps with detected 2D joints as a reference. This approach enables us to attend to the most informative regions, i.e., the joints of our interest. Then we fuse the extracted context features with joint coordinate embedding using transformers. **The collaboration of two stages (2D HPE and 2D-to-3D lifting) and the approach to extract joint-context features from images are novel in lifting-based 3D HPE** ("novel framework" (**dcLQ**)).
### While Using Image Features is Not New, What Makes Our Approach Different?
Although image features may have been explored in related fields, it does not indicate a lack of novelty in our work. Leveraging feature maps from 2D pose detectors is **non-trivial** in the context of lifting-based 3D HPE, as lifting only 2D joints to 3D is the *default* pipeline. **Our idea's uniqueness shines through its thorough understanding of the task setting.** We derive solutions directly from the task pipeline (i.e., leveraging readily available feature maps based on detected 2D joints, all elements are *off-the-shelf* in the pipeline), eliminating the need for external interventions (e.g., without introducing an extra network to extract image features). We believe it is important to evaluate the novelty of a research idea based on the domain challenge, as different domains may face different challenges.
## 4. Other Merits: Our Method is **Easy** and **Simple**
1) We *reuse* 2D pose detector feature maps instead of introducing extra networks, avoiding potentially high computational overheads;
2) 2D pose detectors are only pre-trained with 2D HPE (i.e., such backbones are largely available), requiring no finetuning on the 3D task or multi-stage training, which eases the training pipeline;
3) Our method is compatible with different 2D pose detectors;
4) Our framework is simple (**THhA**,**ez57**,**4XvM**). | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces Context-Aware PoseFormer, a new approach that leverages intermediate visual representations from pre-trained 2D pose detectors to implicitly encode spatial context for 3D human pose estimation. Despite the simple network structure, the proposed method outperforms existing state-of-the-art single- and multi-frame methods by a large margin without relying on temporal information.
Strengths: - This paper provides an interesting insight that leverages the spatial contextual information of pre-trained 2D pose detectors, referred to as "context-aware" information, to improve the accuracy of 3D human pose estimation.
- The proposed approach provides considerable benefits by elevating the accuracy of single-frame methods to a level on par with state-of-the-art multi-frame methods.
Weaknesses: - Incremental novelty in the idea of context-aware features and a lack of discussion of existing context-aware features.
- Modeling context-aware operations or networks has been studied in many other computer vision topics. For example, pixel-aligned features [1, 2] have been extensively used in 3D human/hand reconstruction from single/multi images. This pixel-aligned feature is demonstrated to be the most useful module in Table 3 while other modules only improve the performance marginally.
- Incomplete evaluation regarding the pretrained context-aware feature network.
- For the pretraining, what's the impact of different pre-training datasets (e.g. number of training data) on the estimation?
- For the pretraining backbone, what's the impact of latest backbone on the final results? e.g. ViT-based [3] pre-trained backbones MAE [4].
Reference
- [1] Pymaf: 3d human pose and shape regression with pyramidal mesh alignment feedback loop. ICCV 2021
- [2] Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. ICCV2019
- [3] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale ICLR
- [4] Masked autoencoders are scalable vision learners. CVPR 2022
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please see the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Author Response to Reviewer THhA
## 1. Clarifications on Novelty
We disagree that the novelty of our idea is incremental. We hope the reviewer could check our `General Response` for more comprehensive clarifications on the novelty of our work.
We thank the reviewer for sharing related works that leverage pixel-aligned features [1,2], and we will discuss them in our `Related Work (Sec. 2)` in the final version! However, our method can be distinctively differentiated from them in the following aspects:
**1) Motivation:** Pixel-aligned features [1,2] were primarily proposed to **reduce the misalignment** between estimated 3D representations and input 2D images. However, we propose to leverage joint-context features to **reduce ambiguities** in lifting-based 3D HPE, e.g., depth ambiguity, thus removing the need for long-term temporal processing.
**2) Source of Feature Maps and the Way to Extract Features of Interest:** Even though our work uses feature maps in a point-wise manner similar to pixel-aligned features [1,2], the technical details differ significantly. First, while PyMAF [1] and PIFu [2] introduce an **independent** image encoder to produce image features, we **reuse** the feature maps from 2D pose detectors, which are *inherently* part of the two-stage 3D HPE pipeline. Plus, we extract joint-context features based only on **estimated 2D joints**. On the contrary, pixel-aligned features [1,2] are extracted by **projecting estimated 3D** meshes [1] or query 3D points [2] onto the image plane with camera parameters to enforce consistency between 2D input and 3D estimation. We do not use such 3D-to-2D projection.
**3) Different Research Task and Pipeline:** Given input images, PyMAF [1] reconstructs human *meshes*, and PIFu [2] learns an implicit function for human *surfaces* and *textures*. In contrast, our work focuses on lifting-based 3D HPE, namely inferring 3D *joint coordinates* from images where 2D human joints are intermediate representations. PyMAF [1] and PIFu [2] do *not* use 2D human joints.
[1] Pymaf: 3d human pose and shape regression with pyramidal mesh alignment feedback loop. ICCV 2021.
[2] Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. ICCV 2019.
## 2. Importance of Different Modules
We agree that in Tab. 3 joint-context features bring the most significant improvements over other modules. However, *Pose-Context Feature Fusion* and *Deformable Context Extraction* also provide *non-trivial* gains (**1.6%** and **3.3%** respectively). We want to highlight the **difficulty** of such gains from the two modules:
1. The two modules build on an **already-competitive** baseline (43.5mm on Human3.6M, lower is better), which is comparable to many state-of-the-art methods, e.g., 351-frame MHFormer (43.0mm, CVPR'22) and 243-frame P-STMO (43.0mm, ECCV'22);
2. They bring an error reduction of 2.1mm (from 43.5 to 41.4mm) together, while MHFormer (CVPR'22) and P-STMO (ECCV'22) only outperform the prior art PoseFormer (44.3mm, ICCV'21) by 1.3mm.
## 3. More Evaluation
We also thank the reviewer for the advice on more evaluation regarding the pre-trained backbone networks. **We have already conducted experiments with commonly used 2D pose detectors**, e.g., SimpleBaseline (ResNet-50), CPN, and HRNet, on two pre-training tasks, i.e., COCO 2D HPE (the most popular) and ImageNet classification. Such comparisons can be found in the `main paper (Sec. 4.1)` and `supplementary material (Sec. 6)`.
We additionally conduct experiments on ViTPose [3], the recent SOTA on 2D HPE. The weights of ViTs in ViTPose were initialized with MAE [4] pre-training, then the model was further trained on 2D HPE datasets. The results and observations are summarized below.
[3] ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation. NeurIPS 2022.
[4] Masked autoencoders are scalable vision learners. CVPR 2022.
| Index | Backbone | Pre-training Datasets | Multi-scale Design | mAP on COCO$\uparrow$ | MPJPE on Human3.6M$\downarrow$ (mm) |
|:--:|:--:|:--:|:--:|:--:|:--:|
|1|CPN|COCO|Y|68.6|41.6|
|2| HRNet-32 |COCO|Y|74.4|41.4|
|3|ViT [3]|COCO|N|75.8|44.5|
|4|ViT [3]| COCO+AIC+MPII |N|77.1|41.9|
**Note:** CPN and HRNet do *not* provide official multi-dataset pre-trained weights, so we apply this setting to ViTPose.
### 3.1 The Impact of Backbone Design
**Backbone Design Outweighs the Results on 2D HPE.** The first three rows in the table above demonstrate that better results on 2D HPE do *not* necessarily translate to better performance on 3D HPE: Although ViTPose performs best on COCO 2D HPE, it achieves the worst results on Human3.6M. We attribute this result to the lack of multi-scale network design. ViTPose gains from powerful MAE pre-training with a modern transformer architecture, while its network design is arguably simplified for 2D HPE. Specifically, ViTPose operates on tokenized image patches with transformers and finally increases the resolution of feature maps using 2D deconvolution layers. It uses *no* multi-scale designs, e.g., high-resolution feature branches or multi-scale feature fusion, as in CPN and HRNet. However, such techniques may help our approach learn more task-relevant information (i.e., joint-context features) to localize joints in 3D.
### 3.2 The Impact of Pre-training Datasets
**More Pre-training Datasets on 2D HPE Improve the Performance on the 3D Task.** A comparison between row 3 and row 4 in the table above shows that multi-dataset pre-training improves performance on both 2D HPE and 3D HPE. Plus, the gains on 3D HPE (5.8% error reduction) are even more significant than those on 2D HPE (1.3 points improvement on AP). We hypothesize that multi-dataset pre-training improves the generalization ability of the learned backbone features. Therefore, ViTPose's best performance on 2D HPE transfers better to the 3D task.
We find these results inspiring and will incorporate them in `Sec. 4` of the final version.
---
Rebuttal 2:
Comment: We kindly wish to remind the reviewer to consider our response and additional experiments. We are more than willing to provide further clarifications if there are any lingering questions or concerns. | null | null | null | null | null | null |
Lift Yourself Up: Retrieval-augmented Text Generation with Self-Memory | Accept (poster) | Summary: The paper proposes a novel framework called Selfmem to address the limitations of retrieval-augmented text generation. Compared with memory retrieval from a fixed corpus, Selfmem iteratively uses a retrieval-augmented generator to create an unbounded memory pool and select the candidate output as memory for the subsequent generation round. The paper evaluates the effectiveness of Selfmem on three text generation tasks and achieves state-of-the-art results. The paper also conducts thorough analyses of each component in the framework.
Strengths: The motivation of the primal problem and its duality is fairly intriguing. Preliminary experiments and supporting evidence demonstrate the rationality of the proposed method.
The proposed Selfmem achieves promising results on machine translation, summarization and dialogue generation, indicating its practical applicability.
The paper is overall clearly written. The problem setting and proposed approach is lucidly presented.
Weaknesses: There is a growing body of recent work [1] that is increasingly using LLMs to generate knowledge instead of retrieving it from external corpus. This paper shares similar ideas with the new paradigm and should incorporate and discuss it in the related work section.
Further clarification and empirical evaluation on the diversity of retrieval memory from Selfmem should be provided. Given the limited knowledge encoded in the trainable generator, it remains uncertain whether Selfmem can effectively handle knowledge-intensive tasks compared to retrieval from external sources.
Selfmem appears to require more computational cost than direct retrieval. However, there is no further discussion on time computation in the paper.
[1] Yu et al. Generate rather than Retrieve: Large Language Models are Strong Context Generators. ICLR 2023
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Although you mention the computational resources in the limitations section, can you offer an empirical comparison between Selfmem with other retrieval-based methods?
2. For a more vivid demonstration, can you provide any case studies or examples to illustrate how the generated contexts from Selfmem outperform the retrieval-based ones.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we would like to express our sincere gratitude for the time and effort you have dedicated to reviewing our paper. We will address each comment in a point-by-point manner.
- Comment 1: discussion about using LLM to generate knowledge instead of retrieval
- Response 1: Thank you for mentioning the relevant reference by Yu et al. and saving us from the flood of LLM papers. We will include this paper, as well as any other pertinent studies, in the final version of our work.
---
- Comment 2: Evaluation of knowledge-intensive tasks
- Response 2: We appreciate your suggestion to incorporate knowledge into our framework for evaluation on knowledge-intensive tasks. However, it is important to note that the primary focus of this paper is not on this aspect. In the retrieval-augmented literature, there are generally two types of working systems: one that directly retrieves kNN samples from the training set, wherein task knowledge is implicitly expressed through format, style, and word selection for tasks such as machine translation [1] and language modeling [2]; and the other that focuses on external knowledge retrieval for knowledge-intensive tasks, such as open domain QA [3] and fact verification [4]. Our work is primarily concerned with the former type and we leave the latter for future exploration.
[1] Urvashi Khandelwal et al. Nearest Neighbor Machine Translation
[2] Urvashi Khandelwal et al. Generalization through Memorization: Nearest Neighbor Language Models
[3] Gautier Izacard et al. Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering
[4] Patrick Lewis et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
---
- Comment 3: Empirical results of computation overhead
- Response 3: Thank you for highlighting this important aspect. We have provided a detailed analysis of the computation overhead of Selfmem in comparison to baseline systems in the global rebuttal pdf file. We hope this addresses your concerns.
---
- Comment 4: Request for a more vivid demonstration
- Response 4: Upon the paper's acceptance, we will release all outputs, including those from Selfmem and baselines, and we will make an effort to identify examples that best showcase our method.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response with commitments. I think the overall quality of this paper is good. I will keep my score as "Weak Accept".
---
Rebuttal 2:
Comment: Dear Reviewer, I hope this message finds you well. As the discussion period for our paper is nearing its end, we kindly request your response to our rebuttal letter. We value your insights and are eager to address any concerns you may have. Your timely feedback would be greatly appreciated and will help us improve our paper. Thank you for your time and dedication to the review process. | Summary: The paper presents a framework, Selfmem, aimed at enhancing text generation tasks via memory retrieval. The core uniqueness of the framework resides in its capacity to create an unbounded memory pool by utilizing its retrieval-augmented generator and memory selector components iteratively. This allows the model to leverage its output, referred to as self-memory, for improved generation. The authors apply this framework into neural machine translation, abstractive text summarization, and dialogue generation tasks, demonstrating its efficacy with state-of-the-art results
Strengths: (1) The paper proposes the Selfmem framework, which addresses the constraint of fixed corpus in memory retrieval.
(2) The paper demonstrates the effectiveness of Selfmem on various text generation tasks.
(3) The paper provides detailed analyses of components within the Selfmem framework, revealing potential bottlenecks and providing insights.
Weaknesses: (1) It would be better if more experiments beyond neural machine translation, abstractive text summarization, and dialogue generation tasks could be conducted. It will be beneficial to see its application in other related areas since the proposed method is task-agnostic.
(2) Much of the paper's experimentation relies on certain metrics (ROUGE-1) for evaluation. The universality of the findings might be cross-checked through the usage of other performance metrics.
(3) Though the authors mention learnable retrievers, they don't discuss how the Selfmem framework may be adapted for use with such retrievers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) Have you considered using other evaluation methods (e.g., human evaluation) to further validate the performance of the framework?
(2) How can the Selfmem framework be adapted for optimal usage with learnable retrievers?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: A potential concern is, compared to human-written texts, machine-generated texts are noisier and with potential bias. Hence, is the proposed method able to avoid being affected by its own bias?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we would like to express our sincere gratitude for the time and effort you have dedicated to reviewing our paper. We will address each comment in a point-by-point manner.
- Comment 1: More generation tasks needed.
- Response 1: Thank you for recognizing the versatility of our method. In our current study, we have chosen three generation tasks across seven datasets to assess our framework. This decision is based on two main factors: (1) we believe that these three tasks, specifically machine translation, summarization, and dialogue generation, effectively represent text generation tasks, and (2) these tasks have been extensively employed in retrieval-augmented literature [1][2][3], providing a relatively comparable testbed for benchmarking our method. Additionally, we plan to make our code open-source, allowing researchers to verify its effectiveness in other tasks.
[1] Jason Weston et al. Retrieve and Refine: Improved Sequence Generation Models for Dialogue
[2] Jiatao Gu et al. Search Engine Guided Neural Machine Translation
[3] Nabil Hossain et al. Simple and Effective Retrieve-Edit-Rerank Text Generation
---
- Comment 2: Metrics other than ROUGE for evaluation
- Response 2: First, we would like to clarify that we did not heavily rely on ROUGE-1 for evaluation. For various generation tasks with distinct desired attributes, we employed different well-acknowledged metrics for assessment. Moreover, to cross-check our findings, we have included additional evaluation results of TER and chrF++ for translation tasks in the appendix. Furthermore, human evaluation and GPT-4 evaluation results are available in the global response PDF file for your reference.
---
- Comment 3: Combination with learnable retriever
- Response 3: Although we briefly mentioned learnable retrievers in our paper, we do not believe it is of significant relevance to Selfmem. As illustrated in Algorithm 1, the retriever's role within our framework is mainly confined to the initial step. While a learnable retriever might outperform a non-learnable one, incorporating it would not fundamentally change our framework. This is because the core concept of Selfmem lies in generating memory rather than retrieving it, based on the observation that the quality of retrieved memory is always constrained by the memory pool. Furthermore, the empirical results in Table 2 provide a direct comparison between our framework and a generation model with a learnable retriever (MonoNMT), demonstrating the superiority of self-memory over retrieved-memory.
---
- Comment 4: Debias from machine-generated text
- Response 4: Thank you for bringing up this intriguing issue. We believe that our framework has the potential to reduce bias in machine-generated text. Considering that the entire framework is optimized by the memory selection metrics $\Delta(\cdot,\cdot)$, it is feasible to introduce a debiasing term or even incorporate human feedback into the $\Delta(\cdot,\cdot)$ to improve the quality of text generation. We will consider delving deeper into this aspect in future research.
---
Rebuttal Comment 1.1:
Title: The response is read.
Comment: Thanks for your response.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Thanks for the comment. We wanted to check if we were able to resolve some of your concerns/questions and if you had any further comments on our work. If not, we hope our response may merit raising your score. | Summary: The traditional approach for memory retrieval is constrained by the quality of the fixed corpus from which memory is retrieved. Based on the exploration that "better generation also promotes better memory", this work proposes the Selfmem framework, which iteratively employs a retrieval-augmented generator to create an unbounded memory pool and uses a memory selector to select a generated memory for the next round. Experimental results demonstrate the effectiveness of Selfmem in fine-tuned small models and few-shot LLMs for three different text generation tasks.
Strengths: 1. The motivation of this work is novel. It iteratively employs a retrieval-augmented generator to create an unbounded memory pool and uses a memory selector to select a generated memory for the next round. This enables the model to leverage its own output to improve generation.
2. The method proposed in this paper has a good effect and has achieved improvement on multiple datasets on three text generation tasks. In addition, this work also verified the proposed method for LLM.
3. The experimental analysis of this paper is sufficient. In addition to verifying the effectiveness of the proposed method on multiple datasets, this paper also conducts thorough analyses of each component in the Selfmem to identify bottlenecks and provide insights for future research.
Weaknesses: 1. The generality of the method is limited. When the proposed method uses the memory selector to select a candidate from the candidate pool, different metrics are used for different text generation tasks, which limits the generality of the method on different tasks.
2. The cost of the proposed method is large. The Selfmem needs to execute retrieval and generation multiple times until converges.
3. There are some writing problems in this paper:
- There is an extra word “open” in line 215.
- Line 141 \phi(R) -> \phi(\mathcal{R}).
- Equation (4) MemoryEncoder(m).
- No details about how to generate candidate pool \mathbb{C}.
4. A generation example can be provided in the experimental results so that readers can feel the effect of the method in this paper more intuitively.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. When the proposed method uses the memory selector to select a candidate, different metrics are used for different text generation tasks, which limits the generality of the method on different tasks. Is there a good general method or metric that can be applied to different text generation tasks? How well does the proposed method perform when using a generic method or metric?
2. How to preserve diversity when generating candidate pool \mathbb{C}?
3. How to draw the conclusion in Line 143 that 'the generator does not need to resort to external memory'? The conclusion in Line 145 also confuses me a lot, e.g. 'This observation motivates us to select memory according to metrics'. Could you explain in more detail?
4. For tasks such as dialogue generation, this work only uses automatic evaluation, lacking human evaluation. Automated assessments also lack justification.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors discuss limitations in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we would like to express our sincere gratitude for the time and effort you have dedicated to reviewing our paper. We will address each comment in a point-by-point manner.
- Comment 1: Limited generality and universal metrics for different text generation tasks.
Response 1:
- We are grateful for the reviewer's insightful comment concerning the generality and applicability of universal metrics across various text generation tasks. While our work indeed employs different metrics for specific generation tasks, we respectfully disagree with the notion that this approach restricts the generality of our framework. Instead, we contend that this demonstrates the versatility and adaptability of our method.
- The rationale for utilizing distinct metrics for individual tasks stems from the unique characteristics and objectives of each generation task, and what our framework does is provide a method to optimize these attributes accordingly. For instance, in neural summarization tasks, the ROUGE score is deemed more suitable than the BLEU score for several reasons, including its emphasis on recall over precision of ground truth summaries and the absence of a length penalty for summaries of varying lengths. The ROUGE score has also been shown to achieve a stronger correlation with human judgments [1]. This rationale informed our decision to use ROUGE for optimizing our memory selector and, consequently, our generator. If a generation task needs diversity, we optimize for diversity. If it needs fidelity, we optimize for fidelity. The only modification needed is adjusting the memory selection metric in Algorithm 1 of our paper.
- Furthermore, we believe that the experimental results obtained across three different tasks within this single framework provide empirical evidence that our framework is not a task-specific one.
- As for the interesting question about the existence of a universal metric suitable for all text generation tasks, irrespective of their specific requirements, we consider such a metric unlikely. If one were available, there would be no need to divide text generation tasks into multiple subfields. However, one potential solution to this challenge is the use of human preference, as employed in the optimization of Large Language Models (LLMs) like ChatGPT. Our framework can also be adapted to this context by adjusting the memory selection metric to reflect human preference.
We express our sincere gratitude for raising this thought-provoking issue and welcome further discussion on the subject.
[1] Alexander R. Fabbri et al. SummEval: Re-evaluating Summarization Evaluation
---
- Comment 2: Latency caused by selfmem
- Response 2: We appreciate the reviewer's concern regarding the latency caused by our method, as mentioned in the limitations section in the Appendix. We acknowledge that our proposed method may result in higher latency compared to conventional generation scenarios. However, it is essential to note that this paper represents the first attempt in this direction, and our primary focus is on the novel aspects of the approach rather than efficiency. We believe there is significant potential for improvement in future work, such as incorporating an efficient dual encoder in place of a cross-encoder for memory selection, and reducing the number of iterations by increasing the size of the candidate pool, as suggested by Reviewer LVYL. We are eager to explore these possibilities and engage in further discussions to address the issue of latency while maintaining the integrity and effectiveness of our approach.
---
- Comment 3: Writing problems
- Responses 3: Thank you for your thorough review and we will complete another round of proofreading in our final version. As for the candidate pool generation method, we have included it in Section 4.2: "For all tasks, the candidate generation method involves a beam search with a beam width of 50.”
---
- Comment 4: Generation example
- Response 4: Thank you for bringing this to our attention. Upon the paper's acceptance, we will release all outputs, including Selfmem and baselines. Additionally, we will endeavor to find examples that best illustrate our method.
---
- Comment 5: Preserving diversity in candidate pool
- Response 5: As demonstrated in previous work [1], employing beam search is adequate for achieving significant improvement, and thus, we have not incorporated diversity into this paper's scope.
[1] Ann Lee, et al. Discriminative Reranking for Neural Machine Translation
---
- Comment 6: Explanation of the conclusion in Line 143 and Line 145
- Response 6: We apologize for any confusion that our presentation may have caused, and appreciate the opportunity to clarify. The conclusion in Line 143 is drawn from the observation that the generation confidence of the overlapping token set was generally higher than the average. This indicates that the generator is more likely to produce these low-perplexity tokens even without the provided memory. As a result, the conclusion from Line 145 could be more accurately stated as: "This observation motivates us to reconsider selecting memory based on p(y|x)." If there is any remaining confusion, we encourage you to refer to our Response 2 to Reviewer Z8iX.
---
- Comment 7: Lack of human evaluation results
- Response 7: We appreciate your suggestion and have included human evaluation results as well as GPT-4 evaluation results in the attached global rebuttal PDF file for your reference.
---
Rebuttal Comment 1.1:
Title: For Comment 1
Comment: > While our work indeed employs different metrics for specific generation tasks, we respectfully disagree with the notion that this approach restricts the generality of our framework. Instead, we contend that this demonstrates the versatility and adaptability of our method.
I do not fully agree with this statement, because if the proposed framework is a better optimizer towards a specific metric, its performance will definitely be better than the other method that does not optimize the whole system using the metric. Additionally, the comparison with the paper 'REPLUG: Retrieval-Augmented Black-Box Language Models' is also needed, because it optimizes IR directly using the final metric.
> However, one potential solution to this challenge is the use of human preference, as employed in the optimization of Language Models (LLMs) like ChatGPT. Our framework can also be adapted to this context by adjusting the memory selection metric to reflect human preference.
Indeed, your perspective holds merit. Exploring this avenue in the future might yield intriguing results.
---
Reply to Comment 1.1.1:
Comment: > I do not fully agree with this statement, because if the proposed framework is a better optimizer towards a specific metric, its performance will definitely be better than the other method that does not optimize the whole system using the metric.
We appreciate your comment and we totally agree with you that systems designed to optimize specific metrics tend to outperform those that don't. Indeed, this is what a reward model aims to achieve, as discussed in Section 2.2 (Related Work) of our paper.
However, the reranking process is just a one-way process, while our framework innovatively utilizes the duality of retrieval-augmented generation and combines the retriever and ranker in a unified framework, which has never been explored before. Furthermore, our experimental results demonstrate that our framework can outperform the two-stage reward model, as evidenced in Table 3 under the "self" column, and in Table 5, where BRIO serves as a reranking model.
> Additionally, the comparison with the paper 'REPLUG: Retrieval-Augmented Black-Box Language Models' is also needed, because it optimizes IR directly using the final metric.
Thanks for your insightful question. Here we would like to clarify that the use of the final metric to optimize IR was not first proposed by REPLUG, as we have already included a more relevant baseline, MonoNMT, in our paper. MonoNMT is the pioneering work that employs a learnable IR to enhance retrieval-augmented generation. To our understanding, REPLUG primarily focuses on augmenting **black-box** generation systems with learnable IR, which is not a focus of our framework.
Furthermore, we want to emphasize that both MonoNMT and REPLUG, using learnable retriever to fetch better memory, address the primary problem discussed in our paper: better memory prompts better generation. Our work, on the other hand, is the first to explore the dual problem—better generation also prompts better memory—and combines these two problems. Thank you for providing this REPLUG paper; we will incorporate it into our final version.
> Indeed, your perspective holds merit. Exploring this avenue in the future might yield intriguing results.
Thanks for your acknowledgement. We greatly value your expertise and proficiency in retrieval-augmented generation and reward modeling. We hope this additional clarification can address your concerns and merit raising your score. Please kindly let us know if you have any remaining concerns.
---
Rebuttal Comment 1.2:
Title: For Comment 2
Comment: Thanks to the authors for addressing most of my concerns.
> We appreciate the reviewer's concern regarding the latency caused by our method, as mentioned in the limitations section in the Appendix.
I find the Appendix only mentions this problem, with no detailed analysis or further experiments on the latency. An in-depth quantitative analysis is required.
---
Reply to Comment 1.2.1:
Comment: > I find Appendix only mentions this problem while no detail analyzes or further experiments on the latency. An in-depth quantitative analysis is required.
Thanks for your comment. We have included in-depth quantitative analysis in the global rebuttal pdf file for your reference. | Summary: This paper aims to address the retrieval-augmented text generation problem, where the principle of memory retrieval is to select examples similar to the input. The authors are motivated by an observation in their preliminary experiments that the memory examples that better resemble the data distribution during inference come from the model's output, instead of the training data. Therefore, they propose a novel framework, selfmem to address the retrieval-augmented text generation problem. The proposed framework consists of a memory-augmented generator and a memory selector. Differently, the generator can produce multiple candidates to serve as generated memory instead of retrieved memory from the training corpus. The authors experiment on three generation tasks: machine translation, summarization, and dialogue, and demonstrate performance improvement in various evaluation metrics.
Strengths: 1. The paper provides sufficient related work, which is helpful for readers to understand the research context.
2. The research motivation is well-supported by preliminary experimental results, which is a good starting point for the research.
3. The paper includes thorough experiments covering representative tasks in the generation field, including machine translation, summarization and dialogue generation, which could be a valuable contribution.
Weaknesses: 1. The novelty of the paper is limited. The authors propose a dual structure: Retrieval-augmented Generator and memory selector, each of which is built upon existing paradigms.
2. The overall paper is not well-presented.
(a) Many parts of the paper lack clear descriptions. For example, the setting of the preliminary experimentation is unclear at all.
In L119-120, where does the number 38.89 in Table 1 come from? For example, what does the row header (Memory, Hypothesis) mean? There is no brief description of this experimental setting in the context. The results in Table 1 are not clear enough to understand the motivation.
(b) Additionally, there are grammar errors and unclear sentence structures in the paper that hinder the reader's comprehension.
In L113-114, "The primary motivation behind our framework stems from the observation that the memory, which is more similar in distribution to the data during inference, is not the training data (38.89 BLEU, as shown in Table 1)." -> "The primary motivation behind our framework stems from the observation that the memory is more similar in distribution to the data during inference, not the training data (38.89 BLEU, as shown in Table 1)."
3. The experimental results are not that convincing as the authors did not provide human evaluation results. Due to the limitations of automatic evaluation metrics, it is usually necessary to conduct human evaluation to assess the quality of generated output in generation tasks.
In this paper, the generation is boosted with the examples that meet the automatic evaluation metrics. This naturally leads to an increase in the numerical values of automatic evaluation metrics. It is still necessary to use human evaluation results to further determine the quality of the generated output.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Suggestions:
1. In Figure 2, the shading in the square exceeds the boundaries, which needs to be fixed.
2. More captions are needed for Figure 2(b) to better illustrate the content.
Typos & Grammar errors:
1. Please refer to the comments in Weaknesses.
2. In L215, there is an irrelevant word "open" that should be removed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I didn't find the relevant discussion in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we would like to express our sincere gratitude for the time and effort you have dedicated to reviewing our paper. We will address each comment in a point-by-point manner.
- Comment 1: Limited novelty
- Response 1: We acknowledge that our study's two primary components, namely the retrieval-augmented generator and the two-stage reward model, are indeed rooted in existing literature. We have conscientiously cited these sources throughout our paper to duly credit their contributions. However, we contend that the novelty of our work should not be downplayed, analogous to how GAN (Ian J. Goodfellow et al.) should not be viewed as merely a fusion of a generator and a discriminator. Our research, for the first time, delves into the fundamental limitations of bounded memory within the current retrieval-augmented literature and proposes a novel self-uplifting framework by integrating the primal and dual problems. This unique aspect of our study has also been acknowledged by other Reviewers. Furthermore, we have executed a rigorous empirical evaluation across a diverse range of text generation tasks, encompassing three distinct categories, two generation paradigms, and seven datasets. The results reveal that our method consistently outperforms existing works employing either one of the two components.
---
- Comment 2: Unclear description of preliminary experiment and grammar errors.
- Response 2: First, we apologize for any confusion that our presentation may have caused. To clarify, the value 38.89 in Table 1 denotes the average BLEU score of the retrieved memory with respect to the ground-truth reference. In this table, the numbers under the headers (Memory/Hypothesis) refer to the quality of the memory and the hypothesis, respectively, as assessed by the BLEU score. The underlying motivation and rationale for this preliminary section can be summarized as follows:
1. Given the primal problem that better memory prompts better generation, we first investigate whether the output from the generation model could directly serve as a more effective memory, considering its higher similarity to the ground truth compared to the retrieved memory (please refer to the first and second rows under the Memory head in Table 1).
2. The results do not support this assumption (as seen in the first and second rows under the Hypothesis head in Table 1), prompting us to consider two potential reasons for this outcome. We then conduct another experiment to eliminate the first possibility (evident in the third row of Table 1) and opt for the second reason, supported by uncertainty analysis. This finding encourages us to avoid selecting memory based on p(y|x) and reaffirms the necessity of a memory selector in our framework.
We hope that this explanation clarifies our motivation and we are more than willing to engage in further discussions to address any potential misunderstandings.
---
- Comment 3: Lack of human evaluation results
- Response 3: We appreciate your suggestion and have included human evaluation results as well as GPT-4 evaluation results in the attached global rebuttal PDF file for your reference.
---
- Comment 4: Lack of discussion of limitation
- Response 4: The discussion can be found in the appendix, which is included in the supplementary zip file.
---
Rebuttal 2:
Comment: Dear Reviewer, I hope this message finds you well. As the discussion period for our paper is nearing its end, we kindly request your response to our rebuttal letter. We value your insights and are eager to address any concerns you may have. Your timely feedback would be greatly appreciated and will help us improve our paper. Thank you for your time and dedication to the review process. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We would like to express our deepest appreciation for the time and effort you have devoted to reviewing our conference paper. In response to the questions raised by each reviewer, we have provided detailed answers in the corresponding sections, and we hope that these clarifications will address some of your concerns.
In light of the shared concerns regarding latency evaluation and human evaluation, we have prepared an additional PDF file that is attached to this response for your reference. We are eager to engage in further discussions about our paper and to continue refining and enhancing our work with the invaluable assistance of esteemed reviewers.
Once again, thank you for your valuable insights, and we look forward to your continued guidance and support.
Sincerely,
Authors
Pdf: /pdf/5568280ecfcb1a148cf8ed3c284d839d92ef5a60.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes an iterative text generation procedure. First, the authors augment the model with retrieval to generate the initial beam of predictions. Then they apply rescoring model (trained separately to maximize BLEU/ROUGE scores) to select the higher-quality predictions. Finally, the authors augment the model with these predictions and perform the next round of generation. The procedure is then repeated multiple times.
Strengths: The paper performs experiments on multiple benchmarks for machine translation and summarization, showing strong performance of the method. The authors also experiment with several models, including (1) a fine-tuned fusion-in-decoder, (2) a fine-tuned fusion-in-decoder where the input and retrievals are encoded separately, and (3) frozen decoder-only LLMs.
Weaknesses: The paper could be restructured to make it easier to read. Specifically, the paper heavily relies on the notion of "memory." This could be confusing since the paper only briefly explains what specific memory the method is using. For example, do we retrieve parallel sentences from the training data in machine translation? Or do we retrieve from a monolingual corpus in the source/target languages?
Moreover, I'm not sure I'm a fan of the "memory" positioning of the paper. Does it matter that the first prediction is made using retrieval augmented model?
Finally, it's unclear to me how much of the gains are due to the reward model and how much are due to iterative prediction. For example, while we know that doing four iterations works better than one -- would you see similar gains from doing one iteration but with a four-times-larger beam size? This way, the reward model could pick higher-quality predictions.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Would it make sense to cite related works on the iterative prediction? For example, some of the potential candidates include
Zemlyanskiy et al., Generate-and-Retrieve: use your predictions to improve retrieval for semantic parsing, COLING 2022
Kumar et al., In-context Examples Selection for Machine Translation, arxiv
What is the "Memory" column in Tables 1 and 3? Does it mean the highest (or average?) BLEU score of the retrieved sample if we use it as a prediction? If so, I'm surprised that such a nearest-neighbor-like classifier gets > 38.89 BLEU (Table 1).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we would like to express our sincere gratitude for the time and effort you have dedicated to reviewing our paper. We will address each comment in a point-by-point manner.
- Comment 1: Regarding the explanation and positioning of memory
- Response 1: We maintain that memory plays a crucial role in our system and deserves emphasis. To illustrate this, let us consider a machine translation task as an example. In the first step, we perform a retrieval operation in bilingual corpora, using the retrieved sentence (target side) as our initial memory, as outlined in line 1 of Algorithm 1 and lines 154-155 of the paper. Subsequently, we employ the retrieved memory to train a memory-augmented generator, enabling us to leverage the selected memory in the subsequent iterative process. While it is not essential for the first prediction to be made using a retrieval-augmented model, it is indeed crucial for us to have a retrieval-augmented model to utilize the first prediction (after selection by the memory selector).
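The iterative procedure outlined in this response can be sketched as follows. This is a minimal illustration only, not the paper's implementation: `retrieve`, `generate_candidates`, and `score_memory` are hypothetical stand-ins for the retriever, the memory-augmented generator, and the memory selector described above.

```python
def selfmem_loop(source, retrieve, generate_candidates, score_memory, n_iters=4):
    """Minimal sketch of the Selfmem iteration: the generator's own output,
    filtered by the memory selector, becomes the memory for the next round."""
    # Step 1: initial memory is retrieved from the (target side of the) corpus
    memory = retrieve(source)
    for _ in range(n_iters):
        # Step 2: the memory-augmented generator produces multiple candidates
        candidates = generate_candidates(source, memory)
        # Step 3: the memory selector picks the best candidate as new memory
        memory = max(candidates, key=score_memory)
    return memory
```

The key point this sketch makes concrete is that only the first memory comes from retrieval; every subsequent memory is selected from the generator's own candidate pool.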
---
- Comment 2: Would you achieve similar gains from conducting one iteration but with a beam size four times larger?
- Response 2: We acknowledge the possibility of further improvement by increasing the candidate pool size within a single iteration round. However, we believe that such a setting may not be highly relevant to the main contribution of our paper. As demonstrated in Figure 2 of [1] and Figure 2 of [2], it is a well-established conclusion that a larger candidate pool results in higher performance for a reward model. Our paper distinctly demonstrates that the iterative process within the Selfmem framework can also yield superior outcomes. As illustrated in Figure 3(b) of our paper, the quality of the candidate set (measured in terms of oracle, quartile, average, and minimum scores) consistently improves throughout the iteration process. This result could not be attained solely by increasing the candidate pool size. The core essence of the framework lies in the steady improvement between the generator and reward model. We argue that an expanded candidate set can be viewed as an enhancement of our framework without changing its fundamental nature. Consequently, the potential outcome of an enlarged candidate pool would manifest as an overall upward shift in Figure 3(b).
---
- Comment 3: Missing reference
- Response 3: We appreciate your attention to this matter. We concur that the two references you mentioned are indeed relevant to our paper, particularly the first one by Zemlyanskiy et al. We will ensure their inclusion in the final version of our paper.
---
- Comment 4: Memory column in Table 1 and 3
- Response 4: The memory columns featured in Tables 1 and 3 display the average BLEU scores calculated from the retrieved samples. To mitigate any potential confusion, we will enhance these tables with more detailed captions. The high BLEU score achieved by a nearest-neighbor-like classifier can primarily be attributed to the dataset we have utilized. Our choice is the JRC-Acquis dataset, which consists of parallel legislative texts pertaining to European Union law and applicable to its member states. Owing to its highly relevant and well-structured data, this corpus serves as an exemplary testbed for assessing memory-augmented neural machine translation (NMT) systems. As a result, a significant number of pertinent studies[3][4] make use of this dataset, which is why we have also opted for it.
[1] Ann Lee, et al. Discriminative Reranking for Neural Machine Translation
[2] Liu, Yixin, et al. SimCLS: A simple framework for contrastive learning of abstractive summarization
[3] Gu, Jiatao, et al. Search engine guided neural machine translation
[4] Cai, Deng, et al. Neural machine translation with monolingual translation memory
---
Rebuttal 2:
Comment: Dear Reviewer, I hope this message finds you well. As the discussion period for our paper is nearing its end, we kindly request your response to our rebuttal letter. We value your insights and are eager to address any concerns you may have. Your timely feedback would be greatly appreciated and will help us improve our paper. Thank you for your time and dedication to the review process. | null | null | null | null | null | null |
DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining | Accept (spotlight) | Summary: This paper proposes to pretrain language models (LMs) by first automatically learning domain weights using a small proxy model and then pretraining a large model under the learned weights. The proposed method, Domain Reweighting with Minimax Optimization (DoReMi), improves pretraining perplexity across all domains and results in better downstream task accuracy with better efficiency.
Strengths: * Originality: The paper studies an interesting direction in seeking the optimal pretraining data mixture/weighting. Both the angle and the proposed method are novel.
* Quality: The method is generally well-designed to accomplish the goal discussed and there have been plenty of experiment results that analyze the effects of DoReMi, but I feel that the paper has not sufficiently motivated the necessity and benefits of learning domain weights (see weaknesses below).
* Clarity: The paper is overall clear.
* Significance: The problem tackled in this paper (i.e., automatic data selection/weighting in pretraining) is important, and the paper shows that the method can improve pretraining perplexity and certain downstream task performance, which can be considered moderately significant. However, there are concerns regarding whether the evaluations are comprehensive w.r.t. tasks and model scales (see weaknesses below).
Weaknesses: * Insufficient motivation: While it's well acknowledged that the data quality is variable across different domains, it's unclear to me whether learning domain weights and using them to construct a "better" corpus is the appropriate way. First, domains are quite coarse partitions of the data, and I'm not very convinced that assigning a single scalar weight to each domain as a whole is an ideal setup. As the authors mentioned, some data might be noisy that should be down-weighted, but shouldn't this be done at an instance level? For example, some code snippets from Github might be erroneous but others are correct. If the entire Github domain is down-weighted, it doesn't really tell apart the clean vs. noisy data, but instead puts a lower priority on learning all code-related data. Second, the authors do not seem to mention or compare with a naive baseline that directly removes noisy domains from the pretraining corpus. For example, what if only high-quality data are used? Gunasekar et al. (I'm aware that this paper came out after the paper submission deadline, but it seems relevant) showed that textbook-quality data with only 6B tokens are sufficient to train good models.
* Unclear generalization ability to large model scales: Although the authors conduct experiments across different model scales, the largest model size tested is 8B. I understand that it's very expensive to train even larger models, but for a paper studying pretraining, I'd expect to see results on larger scales (e.g., 65B), considering that the model performance has a strong correlation with model sizes. This concern appears imminent given that the proxy model does not seem to scale to larger models.
* Choice of evaluation tasks: The paper mainly evaluates on pretraining perplexity and QA-related tasks. For a paper that aims at "models that perform well on all domains", I believe the evaluation should have a more comprehensive coverage of tasks, such as coding and reasoning, especially considering that these tasks are relevant to certain domains in the pretraining corpora (e.g., Github). I'd be curious to know if the performance on code completion will be still higher than the baseline if the Github domain is down-weighted.
* (Minor) As I understand, the appendix should be submitted separately from the main paper.
Reference:
Gunasekar et al. “Textbooks Are All You Need.” 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * How to set the size of the proxy model given the main model size?
* How do the learned domain weights by different sizes of proxy models look like?
Please also clarify any misunderstandings in my review.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the Weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. **ajJF notes that “the angle and the proposed method are novel” and “generally well-designed”, with “plenty of experiment results”.** We address specific questions below:
> “domains are quite coarse partitions of the data, and I'm not very convinced that assigning a single scalar weight to each domain as a whole is an ideal setup. As the authors mentioned, some data might be noisy that should be down-weighted, but shouldn't this be done at an instance level?”
Despite their coarseness, **we already find a significant improvement from reweighting these coarse domains, which addresses an immediate practical need for determining domain weights**. This suggests that **a promising future direction is to define more fine-grained domains** to further increase the power of reweighting. We note this in the discussion section and will clarify in the revision.
> “the authors do not seem to mention or compare with a naive baseline that directly removes noisy domains from the pretraining corpus. For example, **what if only high-quality data are used? Gunasekar et al. (I'm aware that this paper came out after the paper submission deadline, but it seems relevant) showed that textbook-quality data with only 6B tokens are sufficient** to train good models.”
- **Our paper’s comparisons reflect common ways to assess the quality of domains (which is a difficult problem): 1) by intuition (The Pile) or 2) by tuning on downstream tasks (GLaM dataset).**
- In particular, on the GLaM dataset we compare against domain weights that were tuned using the downstream tasks that we evaluate on as an oracle. Without any access to the downstream tasks, we are able to get a similar performance. **In particular, part of GLaM’s tuning process is to evaluate the performance of models trained on each single domain (including books only or web only). Thus we believe our comparison to downstream-tuned weights is optimal, and in particular a stronger comparison than the reviewer's suggestion.**
- Furthermore, we believe that Gunasekar et al., which came out after the paper submission deadline, can focus the pretraining dataset on (coding) textbooks since they are training more specialized models for code.
> “Although the authors conduct experiments across different model scales, the largest model size tested is 8B. I understand that it's very expensive to train even larger models, but for a paper studying pretraining, I'd expect to see results on larger scales (e.g., 65B)”
Although we would like to train 65B models, **due to compute limitations we instead show that DoReMi brings benefits at a variety of scales from 280M to 8B, without diminishing returns**. This suggests that DoReMi will scale well to even larger models (analogous to scaling laws).
> “For a paper that aims at "models that perform well on all domains", I believe the evaluation should have a more comprehensive coverage of tasks, such as coding and reasoning … I'd be curious to know if the performance on code completion will be still higher than the baseline if the Github domain is down-weighted.”
For broad evaluation, we find that **DoReMi improves perplexity on all pretraining domains**. Even though the Github domain is downweighted, we find an improvement in perplexity / next-token prediction, which is closely related to code completion and typically tracks downstream accuracy.
> “How to set the size of the proxy model given the main model size?”
Given that the weights are able to transfer across 30x larger model scales, we suggest **using a fixed proxy model size (e.g. 280M)** for any main model size. We thank the reviewer for the important practical question and will add a discussion in the final revision.
> “How do the learned domain weights by different sizes of proxy models look like?”
We compare the domain weights from 280M and 1B proxy models below. With a 280M proxy model, most of the weight is put on the Pile-CC web text domain, while DoReMi with a 1B proxy model puts most of the weight on OpenWebText2. The overall pattern of the domain weights for the rest of the domains are similar. This suggests there may be multiple local minima in domain weight space, especially when there are some similar domains (such as Pile-CC vs OpenWebText2).
| | Baseline | DoReMi (280M) | DoReMi (1B) |
|-------------------|---------:|--------------:|------------:|
| Pile-CC | 0.1121 | 0.6057 | 0.1199 |
| PubMed Central | 0.1071 | 0.0046 | 0.0149 |
| Books3 | 0.0676 | 0.0224 | 0.0739 |
| OpenWebText2 | 0.1247 | 0.1019 | 0.3289 |
| ArXiv | 0.1052 | 0.0036 | 0.0384 |
| Github | 0.0427 | 0.0179 | 0.0129 |
| FreeLaw | 0.0386 | 0.0043 | 0.0148 |
| StackExchange | 0.0929 | 0.0153 | 0.0452 |
| USPTO Backgrounds | 0.0420 | 0.0036 | 0.0260 |
| PubMed Abstracts | 0.0845 | 0.0113 | 0.1461 |
| Gutenberg (PG-19) | 0.0199 | 0.0072 | 0.0250 |
| OpenSubtitles | 0.0124 | 0.0047 | 0.0017 |
| Wikipedia (en) | 0.0919 | 0.0699 | 0.0962 |
| DM Mathematics | 0.0198 | 0.0018 | 0.0004 |
| Ubuntu IRC | 0.0074 | 0.0093 | 0.0044 |
| BookCorpus2 | 0.0044 | 0.0061 | 0.0029 |
| EuroParl | 0.0043 | 0.0062 | 0.0078 |
| HackerNews | 0.0075 | 0.0134 | 0.0058 |
| YoutubeSubtitles | 0.0042 | 0.0502 | 0.0159 |
| PhilPapers | 0.0027 | 0.0274 | 0.0063 |
| NIH ExPorter | 0.0052 | 0.0063 | 0.0094 |
| Enron Emails | 0.0030 | 0.0070 | 0.0033 |
> (Minor) As I understand, the appendix should be submitted separately from the main paper.
We thank the reviewer for pointing this out, and will correct it in the final revision.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. Some of my concerns (e.g., the omission of the naive baseline) are addressed. Although I do hope to see a finer-grained partition of the domains as well as some concrete downstream task results (instead of merely perplexity-based metrics), especially on the downweighted domains (e.g., Github), these concerns are relatively minor considering the paper's high novelty. Hence, I have updated my overall rating. | Summary: This paper introduces a significant advancement by exploring the topic of data mixture proportions during pre-training, which holds great importance. Determining how to sample pre-trained data from diverse sources to achieve balanced results is a fundamental question. Previous approaches have often relied on intuitive-based weights or extensive experimentation to select a setting. However, these approaches either require extensive computational resources or lack generalization across different settings. Therefore, finding the optimal mixture settings using a smaller model is an intriguing and crucial question.
To address this, the authors propose a group-DRO-based method for determining the weights. The weight is dynamically adjusted during the learning process, and the final weight is selected as the sampling weight. Additionally, the authors conducted extensive experiments, providing substantial evidence to support the effectiveness of the proposed approach.
Strengths: The direction is really important and the general framework is valuable that uses a small network to get the best practice and apply it on larger models.
The proposed method is clearly-written, and the improvements it offers have been convincingly substantiated through extensive experiments.
The general idea is also very elegant. If my understanding is correct, the method generally assigns larger weights to the data that can be learned in the future, and assigns smaller weights to those too easy or difficult data. The general idea is cool.
Weaknesses: The selected weights vary significantly across different models with varying scales.
I still have some concerns about the final implementation. Following the optimization objectives, the authors use the learned weight as the re-sampling weight. It is a little bit strange.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: For instance, the selection of the best weight differs among models with different scales. Does this imply the existence of multiple sub-optimal weight candidates? Moreover, I still do not comprehend why the authors do not apply the final model during the learning process but instead utilize the final weights for re-weighting purposes.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. Overall, uiyj feels that the “the direction is really important”, “the general framework is valuable”, and “the improvements it offers have been convincingly substantiated through extensive experiments”. We address specific questions below:
> “the selection of the best weight differs among models with different scales. Does this imply the existence of multiple sub-optimal weight candidates?”
**We believe that there is a frontier of solutions, especially when there are domains that are similar (e.g., OpenWebText and Pile-CC).** For example, we compare the domain weights from 280M and 1B proxy models below. With a 280M proxy model, most of the weight is put on the Pile-CC web text domain, while DoReMi with a 1B proxy model puts most of the weight on OpenWebText2. The overall pattern of the domain weights for the rest of the domains are similar. We thank the reviewer for the question and will include this discussion in the final revision.
| | Baseline | DoReMi (280M) | DoReMi (1B) |
|-------------------|---------:|--------------:|------------:|
| Pile-CC | 0.1121 | 0.6057 | 0.1199 |
| PubMed Central | 0.1071 | 0.0046 | 0.0149 |
| Books3 | 0.0676 | 0.0224 | 0.0739 |
| OpenWebText2 | 0.1247 | 0.1019 | 0.3289 |
| ArXiv | 0.1052 | 0.0036 | 0.0384 |
| Github | 0.0427 | 0.0179 | 0.0129 |
| FreeLaw | 0.0386 | 0.0043 | 0.0148 |
| StackExchange | 0.0929 | 0.0153 | 0.0452 |
| USPTO Backgrounds | 0.0420 | 0.0036 | 0.0260 |
| PubMed Abstracts | 0.0845 | 0.0113 | 0.1461 |
| Gutenberg (PG-19) | 0.0199 | 0.0072 | 0.0250 |
| OpenSubtitles | 0.0124 | 0.0047 | 0.0017 |
| Wikipedia (en) | 0.0919 | 0.0699 | 0.0962 |
| DM Mathematics | 0.0198 | 0.0018 | 0.0004 |
| Ubuntu IRC | 0.0074 | 0.0093 | 0.0044 |
| BookCorpus2 | 0.0044 | 0.0061 | 0.0029 |
| EuroParl | 0.0043 | 0.0062 | 0.0078 |
| HackerNews | 0.0075 | 0.0134 | 0.0058 |
| YoutubeSubtitles | 0.0042 | 0.0502 | 0.0159 |
| PhilPapers | 0.0027 | 0.0274 | 0.0063 |
| NIH ExPorter | 0.0052 | 0.0063 | 0.0094 |
| Enron Emails | 0.0030 | 0.0070 | 0.0033 |
> “I still do not comprehend why the authors do not apply the final model during the learning process but instead utilize the final weights for re-weighting purposes.”
**We do not use DRO to directly train the large model because it is a more expensive training procedure** that evaluates the losses of two equally sized models (proxy and reference) at every training step. **Instead, we do the expensive DRO training at a small scale, transferring the benefits to the large model through the optimized domain weights.** This also preserves the standard training procedure for the large model. We apologize for the confusion and will clarify in the final revision.
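The DRO-style update discussed in this thread can be illustrated with a rough sketch. This is a hedged illustration, not the paper's exact algorithm: the step size, the clipping of excess losses at zero, and the uniform smoothing constant are our assumptions.

```python
import math

def update_domain_weights(alpha, excess_losses, step_size=1.0, smoothing=1e-3):
    """One exponentiated-gradient step on the domain weights: upweight domains
    whose proxy-model loss most exceeds the reference model's loss, renormalize,
    then mix with the uniform distribution for stability."""
    k = len(alpha)
    # Clip excess losses at zero so only domains where the proxy still lags
    # the reference model get upweighted (an assumed design choice).
    new = [a * math.exp(step_size * max(e, 0.0))
           for a, e in zip(alpha, excess_losses)]
    total = sum(new)
    return [(1 - smoothing) * (w / total) + smoothing / k for w in new]
```

The weights produced over the proxy run then serve as the sampling mixture for the large model's standard training procedure, which is why the expensive two-model (proxy plus reference) loss evaluation only ever happens at the small scale.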
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will keep my score. | Summary: The authors proposed DoReMi for optimizing the mixture proportions of pretraining data domains when training language models (LMs). The authors demonstrate that DoReMi, which utilizes a small proxy model trained via group distributionally robust optimization (Group DRO), can be used to determine optimal domain weights without knowledge of downstream tasks. Subsequently, these weights are used to resample a dataset for training a larger model. Experimental results indicate significant improvements in LM performance using DoReMi, including a 6.5% increase in average few-shot downstream accuracy and achieving baseline accuracy with 2.6x fewer training steps.
Strengths: - DoReMi offers a unique and efficient way to determine domain weights, speeding up LM training and improving accuracy. Moreover, the down-weighted domains are not getting a worse performance, which is surprising.
- It also shows that the domain weights determined by DoReMi are transferable across a broad range of model scales, compute budgets, and other training hyperparameters, making it wide applicability.
- Overall, the idea of DoReMi is simple, and the results can show its effectiveness.
Weaknesses: - Lack of the baselines: In this paper, the baselines are the LMs trained on the original data distribution of Pile, but there should be some simple but stronger baselines as well, like calculating the lexical overlap within each domain and assigning weights to maximize the lexical diversity. Although this simple baseline sounds naive, we still need to justify that their improvement would not be as much as DoReMi.
- DoReMi needs two or more proxy LM training runs in order to obtain the domain weights. However, the authors need to justify that there doesn't exist a simpler but equally effective heuristic that only needs one proxy LM training run, such as the number of times an example is forgotten [1].
- Based on Figure 6, it seems that the choice of proxy model size is very crucial. For the 8B model, the 280M proxy is significantly better than other larger or smaller proxy models. It would be better if the authors could provide an explanation or principles for selecting the size of the proxy model; otherwise, people who want to use DoReMi would actually need to empirically run different proxy models in order to get better performance.
[1] An Empirical Study of Example Forgetting during Deep Neural Network Learning. Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey J. Gordon. ICLR 2019
https://arxiv.org/abs/1812.05159
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - For the domain weight $\alpha$, I am wondering why we need gradient ascent to update it? Isn't it already optimal if we simply assign 1.0 for the domain with the highest excess loss and 0.0 for all the other domains? If the reason is that we need a single set of consistent domain weights till the end of the training, we could also collect the weights along the training process and average them at the end. Did you try this before?
- I am curious whether there are simple heuristics that can achieve the same level of effects. For example, in [1], people found that the number of forgetting times of a training example is a good indicator of how important each training example is. We need this kind of baseline to prove that the 2 runs in DoReMi are necessary and its result is more effective than others.
[1] An Empirical Study of Example Forgetting during Deep Neural Network Learning. Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey J. Gordon. ICLR 2019
https://arxiv.org/abs/1812.05159
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. 5e3R notes that “DoReMi offers a unique and efficient way to determine domain weights” with “wide applicability”. We address specific questions below:
> “the baselines are the LMs trained on the original data distribution of Pile, but there should be some simple but stronger baselines as well, like calculating the lexical overlap within each domain and assigning weights to maximize the lexical diversity.”
- Our paper’s comparisons reflect common ways to assess the quality/importance of domains: 1) by intuition (The Pile) or 2) by tuning on downstream tasks (GLaM dataset).
- In particular, on the GLaM dataset, we compare against domain weights that were tuned using the downstream tasks that we evaluate on as an oracle. Without any access to the downstream tasks, we are able to get a similar performance. **We believe the comparison to downstream-tuned weights is optimal and in particular, stronger than the simple baseline proposed by the reviewer.**
> “DoReMi needs two or more proxy LM training processes in order to obtain the domain weights. However, the authors need to justify that there doesn't exist simpler but equally effective heuristics that only need one proxy LM training, such as the number of example forgetting times [1].”
- Overall, the amount of compute used to train a small proxy model is much smaller than the compute needed to train a large model (2.5% in our 280M to 8B experiment), so that **saving compute in the small proxy model training step only results in a small compute benefit.**
- **In many practical settings, there may already exist a pretrained reference model** that can be used, reducing the number of proxy LM training processes to 1.
> “It would be better if the author can provide explanation or principles to select the size of the proxy model, otherwise, people who want to use DoReMi actually need to empirically run different proxy models in order to get better performance.”
- We believe that we can use a fixed proxy model size (e.g. 280M) to find the domain weights for any main model size, given that the weights are able to transfer across 30x larger model scales. We thank the reviewer for the important practical question and we will add more discussion on this in the final revision.
> “For the domain weight α, I am wondering why we need gradient ascent to update it? Isn't it already optimal if we simply assign 1.0 for the domain with the highest excess loss and 0.0 for all the other domains?”
Although assigning all the weight to the domain with the highest excess loss is optimal for the inner maximization, **it can result in unstable training for the outer minimization** since it reduces the number and diversity of examples in the minibatch. Instead, we follow Sagawa et al. (https://arxiv.org/abs/1911.08731) and employ a mirror descent-based DRO optimizer that updates the domain weight α smoothly while maintaining optimality guarantees.
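As an illustrative sketch of this style of smooth update (an exponentiated-gradient step on the domain simplex, following Sagawa et al.; the exact step size, smoothing constant, and implementation here are assumptions, not the paper's code):

```python
import numpy as np

def update_domain_weights(alpha, excess_losses, step_size=1.0, smoothing=1e-3):
    """One mirror-descent (exponentiated-gradient) step on the domain simplex.

    Domains with larger excess loss are up-weighted smoothly, instead of
    putting all mass on the single worst domain.
    """
    alpha_new = alpha * np.exp(step_size * excess_losses)
    alpha_new /= alpha_new.sum()
    # Mix with the uniform distribution so every domain keeps nonzero weight,
    # preserving the number and diversity of examples in the minibatch.
    k = len(alpha)
    return (1 - smoothing) * alpha_new + smoothing / k
```

Starting from uniform weights, repeated steps concentrate weight on persistently high excess-loss domains while all weights remain positive and sum to one.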
---
Rebuttal Comment 1.1:
Title: Thanks for the responses
Comment: The responses addressed my questions well. I will keep my score of 7 Accept. | Summary: This paper introduces DoReMi, a method for automatically deriving optimal/improved domain weights for aggregated pretraining datasets for LLMs. DoReMi works in three steps: (1) train a small reference model to use for the excess loss in DRO; (2) train a small proxy model with DRO to obtain optimised domain weights; (3) use the optimised weights with a larger model. The authors demonstrate their method leads to significant improvements on The Pile, and also matches manual tuning on GLaM. In ablations, the scaling behaviour is studied, showcasing that proxy model size improves downstream performance up to ~280M parameters, after which improvements degrade as the larger models are inadequately trained by DRO.
Strengths: * **S1.** Automatic optimisation of domain weights for pretraining datasets is an incredibly valuable contribution for the LLM community. Standard procedures have been mostly based on expensive manual tuning, and have not necessarily been principled in their finding.
* **S2.** The paper is well grounded in works around DRO, and adequately positions itself -- diverging from existing methods when necessary. The authors also take care in pointing out current limitations of the DRO approach, and potential for future improvements.
* **S3.** The results obtained on The Pile reproduce the observations recently made by the RedPajama & RefinedWeb datasets: some components of The Pile should ideally be downsampled, and increased web data may be beneficial. The fact that DoReMi repeatedly reproduces results obtained from manual tuning is a good validation of the method.
* **S4.** The paper is well-written and presented, and easy to follow.
Weaknesses: * **W1. The evaluation setup could be broader.** The authors evaluate downstream performance: (1) in 1-shot; (2) in a generative/exact-match setting; (3) on 5 tasks.
* **W1.1.** The choice of a generative exact-match setting is strange for models in the ~100M-1B range, as they consistently struggle with exact match. Instead, for small models, leveraging logprob-based evaluation of multiple choices is more common, and can deliver stronger signal.
* **W1.2.** The choice of task is arbitrary. Rather than these 5 tasks, the authors could have evaluated on the full set of GPT-3 tasks, or on a popular benchmark such as HELM or BigBench (for the larger models).
* **W2. The poor scaling behaviour of DoReMi past 280M parameters for the proxy model is a concern for robustness.** Notably, this might make it difficult for practitioners to apply DoReMi as a "set it and forget it" method. Some level of manual inspection & analysis is required, which holds back the method from fully delivering on its promise of completely automated domain weight optimization. However, I appreciate that the authors discuss this limitation openly and propose some potential ideas for further improvements in this direction.
* **W3.** (smaller nits)
* **W3.1.** The authors showcase in Table 1 the baseline & DoReMi domain weights on The Pile; the presentation of the table could be improved, to better identify which domains are upsampled and which are downsampled -- this could be as simple as sorting the domains and explicitly providing the up/downsampling value in the table.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: This is an excellent paper introducing a method which could see wide adoption in the community as a way to improve aggregated pretraining datasets. The paper is well-written, and opens the door to numerous exciting follow-up works. Accordingly, I would currently rate it as a **Strong Accept (8)**. Note that should my concerns about evaluation be addressed, I would be willing to further increase my score to a 9/10 -- this paper has significant potential for the community, and my only main concern currently is regarding the robustness of the selected evaluation setup.
* **Q1.** (W1.1.) Could the authors better justify their choice of going with generative exact match for the evaluation?
* **Q2.** (W1.2.) Could the authors provide scores (in 0-shot or 1-shot) using logprob-based multiple choices instead?
* **Q3.** (W1.2.) Could the authors provide evaluation results on additional tasks, such as the full set of GPT-3 tasks, or scores on a popular benchmark such as HELM or BigBench? (this applies mostly to the headline 8B models).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors extensively discuss limitations of their work in a dedicated section and provide interesting pointers for further research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. VPSZ felt that the method “is an incredibly valuable contribution”, “reproduces results obtained from manual tuning” from the community, and “could see wide adoption”. We address specific questions below:
> “The choice of a generative exact-match setting is strange for models in the ~100M-1B range, as they consistently struggle with exact match. Instead, for small models, leveraging logprob-based evaluation of multiple choices is more common, and can deliver stronger signal… Rather than these 5 tasks, the authors could have evaluated on the full set of GPT-3 tasks, or on a popular benchmark such as HELM or BigBench (for the larger models).”
**We evaluated on generative exact-match tasks since that is how the models are typically used** (prompting, rather than scoring logprobs). We note that the GPT-3 paper finds that performance on TriviaQA, for example, scales smoothly with model size even at the 100M-1B range, and tracks other benchmarks well. For broad evaluation, we show that DoReMi improves the validation perplexity on all domains, which typically has a strong relationship with downstream performance.
> “The poor scaling behaviour of DoReMi past 280M parameters for the proxy model is a concern for robustness.”
VPSZ is referring to the scaling behavior of DoReMi with respect to proxy model size, where larger (1B) proxy models were found to be less well optimized with DRO. Overall, **we saw gains across all the proxy model sizes**, but we agree with VPSZ that improving the DRO optimization for larger proxy models is an important future step.
> Nits: table presentation
We thank the reviewer for the suggestions, and will include them in the final revision.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: First, I would like to thank the authors for taking time to write a rebuttal to each reviewer.
Based on the rebuttal and the other reviews, I will maintain my score of a **Strong Accept (8)**. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work buils upon prior studies’ empirical findings, emphasizing how the composition of pretraining data affects the performance of Language Models (LMs). To avoid reliance on heuristic or iterative performance measurements on downstream tasks, this work introduces a method that employs a trainable model, capable of assigning appropriate weights to each “domain” of pretrainng data.
The proposed approach leverages the concept of Distributionally Robust Optimization (DRO) to train a small proxy model that learns to assign weights to each domain, thereby minimizing the worst-case excess losses. Subsequently, a larger model is trained using pretraining data that has been adjusted by these optimized domain weights.
The authors demonstrate through experiments that their proposed method significantly accelerates the pretraining process by showing the few-shot accuracy on factual QA tasks. They show that the model trained with optimized pretraining data reaches the performance level of baseline models much faster.
Strengths: - The idea to optimize the pre-training data sounds clever, particularly their use of a proxy model trained with DRO. This offers a novel approach to learning the optimal data distribution without the costly estimation of the large pre-trained model’s task performance under different pretraining data settings.
- This work provides solid experimental evidence, exploring a variety of settings including various proxy and main model sizes, two widely-used pretraining datasets (GLaM dataset and The Pile), and different domain compositions. These experiments adequately address questions that arise during a review of the paper to some extent.
- This research yields some interesting observations. According to Tables 1 and 2, the webpage domain carries the largest weights across both pretraining datasets. This suggests that optimizing the model with respect to the webpage domain could potentially reduce excess loss in other domains. Consequently, pretraining large models with an optimized dataset results in lower perplexities across all domains. Interestingly, despite a decrease in weight for Wikipedia, task performance on tasks derived from it (TriviaQa, NaturalQuestions) improves (Table 5 in Appendix).
These insights could prove beneficial for other researchers in the field.
Weaknesses: - The diversity of downstream tasks examined is relatively narrow, given the paper’s claim.
While the experiments in this paper present solid empirical evidence that the proposed method improves language model performance, the evidence primarily focuses on language modeling and factual question answering tasks in a general domain. As such, the claim that the proposed method “speeds up language model pretraining” might mislead readers, since the evaluation of pretraining should be more rigorous. For example, incorporating experimental results on MMLU [1] or commonsense reasoning (ARC[2], CSQA[3], etc) could increase the credibility of the claim that the pre-trained model with the proposed method generally outperforms previous approaches.
[1] Measuring Massive Multitask Language Understanding
[2] Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge
[3] Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: 1) Why did the authors only experiment with a 280M main model in Figure 6 (right)? I understand the potential cost issues, but I believe it’s essential to show that DoReMi outperforms the webpage-only setting, even with a larger model.
2) Could you clarify how the main model employs domain weight in detail (Step 3 in Section 2)? I was unable to find detailed information on this in the paper. Does the model sample instances in the mini-batch according to the domain weight during pre-training?
Note: The appendix should be submitted as a separate supplementary file.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discuss limitations in Section 6. However, the limitations regarding the evaluation methods for pretraining are not thoroughly addressed in the paper. Please refer to the “Weaknesses” section for more details on my perspective.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. MeFt felt that the paper presented a “novel approach to learning the optimal data distribution”, provides “solid experimental evidence, exploring a variety of settings”. We address specific questions below:
> “While the experiments in this paper present solid empirical evidence that the proposed method improves language model performance, the evidence primarily focuses on language modeling and factual question answering tasks in a general domain…For example, incorporating experimental results on MMLU [1] or commonsense reasoning (ARC[2], CSQA[3], etc) could increase the credibility”
We evaluated on generative exact-match tasks since that is how the models are typically used (prompting, rather than scoring logprobs). We note that the GPT-3 paper finds that performance on TriviaQA, for example, scales smoothly with model size even at the 100M-1B range, and tracks other benchmarks well. For broad evaluation, we show that DoReMi improves the validation perplexity on all domains, which typically has a strong relationship with downstream performance.
> “ Why did the authors only experiment with a 280M main model in Figure 6 (right)? I understand the potential cost issues, but I believe it’s essential to show that DoReMi outperforms the webpage-only setting, even with a larger model.”
- We conducted the ablations of the excess loss objective in Fig 6 using 280M models due to the high cost of running large-scale experiments.
- With regards to the webpage-only comparison: On the GLaM dataset, we compare against GLaM domain weights that were tuned using the downstream tasks that we evaluate on as an oracle. Without any access to the downstream tasks, we are able to get a similar performance. **Part of GLaM’s process of tuning against downstream tasks is to evaluate the performance of models trained on each single domain. Thus we believe the comparison to downstream-tuned weights is a stronger comparison than the webpage-only baseline proposed by the reviewer.** We thank the reviewer for the question and will clarify this in the final revision.
> “Could you clarify how the main model employs domain weight in detail (Step 3 in Section 2)? … Does the model sample instances in the mini-batch according to the domain weight during pre-training?”
The main model trains in a standard way with data sampled from the optimized domain weights. As the reviewer mentions, the data loader samples instances from each domain according to the domain weights, and then combines these instances into a minibatch on-the-fly during pre-training. This can be implemented efficiently with separate data queues for each domain. By sampling on the fly, the reweighted dataset never has to be materialized. We apologize for any confusion and will clarify in the final revision.
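A minimal sketch of such an on-the-fly loader (the helper name and stream structure here are hypothetical illustrations, not the paper's implementation):

```python
import random

def reweighted_batches(domain_streams, domain_weights, batch_size):
    """Yield minibatches whose per-example domains follow `domain_weights`.

    `domain_streams` maps domain name -> an (infinite) iterator of examples,
    standing in for the per-domain data queues; the reweighted dataset is
    never materialized.
    """
    names = list(domain_streams)
    probs = [domain_weights[n] for n in names]
    while True:
        # Pick a domain for each slot in the batch, then pull one example
        # from that domain's queue.
        chosen = random.choices(names, weights=probs, k=batch_size)
        yield [next(domain_streams[n]) for n in chosen]
```

Because domains are drawn independently per example, the expected domain composition of every minibatch matches the optimized weights.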
> “Note: The appendix should be submitted as the separated supplementary file.”
We thank the reviewer for the note and will update this in the final revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I will maintain my score. | null | null | null | null | null | null |
Entropy-based Training Methods for Scalable Neural Implicit Samplers | Accept (poster) | Summary: This article studies the problem of learning implicit samplers, for which the score function is not available, and therefore the KL (or the Fisher) divergence cannot be explicitly computed. They introduce an alternating procedure, where, for a fixed sampler parameter $\theta$ they first learn the surrogate score $s_{\phi}$ of the implicit sampler, and then use this score to minimize the divergence with respect to the target. An equivalence between Fisher divergence training and the min-max approach to minimizing Stein's discrepancy is proven. The attractiveness of the method is illustrated on standard benchmarks.
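(For reference, a sketch of the generic reparameterized-gradient identity underlying this alternating procedure; the notation follows the summary above, and the display is a standard identity rather than a quotation from the paper. With samples $x = g_\theta(z)$ from the implicit sampler $q_\theta$,)

```latex
\nabla_\theta \,\mathrm{KL}(q_\theta \,\|\, p)
  = \mathbb{E}_{z}\Big[\big(\nabla_\theta g_\theta(z)\big)^{\top}
    \big(\nabla_x \log q_\theta(x) - \nabla_x \log p(x)\big)\Big]\Big|_{x=g_\theta(z)},
```

where the intractable sampler score $\nabla_x \log q_\theta$ is replaced by the learned surrogate $s_\phi$.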
Strengths: This article is an original application of score-matching methods to implicit samplers. The procedure is well-motivated, grounded, and, to the best of my knowledge novel. The method is also fairly significant in its somewhat general applicability. The sampling application is relevant to a large part of the community. The article is further pretty well written and only requires minimal changes from a presentation/clarity standpoint.
Weaknesses: The performance of the method is evaluated only in terms of sampling accuracy, never in terms of the total cost *including training*. As a consequence, I don't believe that HMC and SGLD are fair competitors given that they provide samples immediately, with no slow training procedure.
More related are the works considering warping the target distribution, such as https://arxiv.org/abs/1903.03704, https://arxiv.org/abs/2107.08001, https://invertibleworkshop.github.io/INNF_2020/accepted_papers/pdfs/24.pdf, https://arxiv.org/abs/2210.10644.
Understanding the competitiveness of the proposed approach with respect to these more related methods both in terms of ease of training and in terms of samples quality post training would make the article a lot stronger.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: ### What you can do to improve my score
The empirical evaluation is, in my opinion, the weakest part of this, otherwise, well-written and motivated article. Effectively comparing with the relevant sampling literature would improve the empirical soundness and make for a stronger acceptance case.
### Some questions/remarks
- I am very confused by the paragraph on performance l.287 onwards. Can you please reformulate it?
- It would be good to explain how you mimic the FID exactly.
#### Minor comments
- l43-44: please add a specific reference when introducing the neural implicit sampler.
- [24] is by Hyvärinen only. Dayan was the editor of the paper. This seems to come from Google Scholar. Please do verify the rest of the citations for similar issues with Google Scholar referencing.
- In some places, you use "sg" vs. the dagger superscript to denote the stop-gradient operator, it would be better to stick to one only.
- I don't really see the point of A.1, this is well-known (for instance from the log-derivative trick) and should be replaced by a citation in this effect. Also, what is the point of putting in the conditions for Lebesgue's theorem to apply if you don't verify them for your models?
- Please use \eqref rather than \ref for equation referencing.
### Typos
I didn't notice many
- Algorithm 1: dependecne
- Capitalization in citations is off (people's names are often written without upper case: stein -> Stein, markov -> Markov, etc)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions. We will address your concerns one by one in the following paragraphs.
**Q1-2**. Stronger evaluation. Compared to other baselines.
**A1 and A2**.
(1) It is truly a limitation that neural samplers require additional training, so we think it necessary to compare our neural samplers with other neural samplers rather than only MCMC methods. We also agree that the works you mentioned, i.e. neural-enhanced MCMC samplers, are strong competitors which use (invertible) neural mappings to enhance MCMC algorithms for faster mixing or improved performance. However, due to the time limitations of the rebuttal period, we choose to first compare our one-shot neural samplers to other alternative one-shot neural samplers such as KSD-NS and SteinGAN to show the pros and cons of our neural samplers. We are willing to compare more neural-enhanced MCMC samplers in the revision. To this end, we conduct one more experiment on 6 more 2D target distributions. The results, details, and analysis are summarized in **Table 1** in the global author rebuttal cell. In the new experiment, we compare our neural samplers with 3 MCMC baselines: SVGD, LD, and HMC; 1 explicit baseline: a coupling flow model; and 2 implicit samplers: KSD-NS and SteinGAN.
This experiment shows that the trained neural samplers perform comparably to (slightly better than) LD and HMC samplers, while the sampling speed of the neural samplers is 500 times faster than the MCMC samplers (1 versus 500 iterations). Compared with other one-shot neural samplers, KL-NS performs significantly better across all targets. We are willing to compare more neural-enhanced MCMC samplers to obtain a more thorough understanding of our one-shot neural samplers in the revision.
(2) Besides the experiments on low-dimensional 2D targets, we also ran another new experiment that tried to compare KL-NS and Fisher-NS with the alternative KSD-NS and SteinGAN on high-dimensional EBM targets as in Section 4.2 in the main text. However, we find that Fisher-NS and KSD-NS do not converge for the high-dimensional MNIST target, so they are considered not scalable to high-dimensional targets in practice. This limitation of scalability has also been discussed in KSD-NS's original paper [1]. Besides, we find that SteinGAN shows strong mode-collapse behavior: the generated samples collapse to a certain mode of the target distribution, such as the digit "5" or "1" in the MNIST target. We implement SteinGAN with a pixel-space RBM kernel, by viewing each image as a vector. Perhaps a more complex "image space" kernel could resolve the mode-collapse issue, and we plan to continue to explore the comparison of SteinGAN and KL-NS. Overall, we find that KL-NS is not affected by mode-collapse behavior when scaled to image space, while SteinGAN currently is.
**Q3**. paragraph on performance l.287 onwards
**A3**. In line 287 we mean to compare the computational costs of sampling by running a one-shot neural sampler versus running annealed Langevin dynamics MCMC to sample from the neural EBM. Langevin dynamics requires the score function at each iteration, so we need a forward pass of the neural EBM and then a backward pass through it to obtain the score function. The FLOPs account for the computational cost (the number of operations) of the forward pass of a neural network. In Table 3 in the main text, we show that the trained one-shot neural sampler has a total FLOPs count of 1.11G, which is about 80 times more efficient than using annealed Langevin MCMC to sample from the EBM (0.58G×150).
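As a back-of-the-envelope check of the ~80x figure using only the numbers quoted above:

```python
# Numbers quoted from Table 3 of the main text.
sampler_flops = 1.11e9       # one forward pass of the one-shot neural sampler
ebm_flops_per_step = 0.58e9  # one forward pass of the neural EBM
langevin_steps = 150

speedup = (ebm_flops_per_step * langevin_steps) / sampler_flops
print(round(speedup, 1))  # 78.4, i.e. roughly 80x
```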
**Q4**. It would be good to explain how you mimic the FID exactly.
**A4**. We mimic the concept of FID by pre-training a wide-resnet classifier [4] on MNIST data to 99%+ accuracy. We then extract features (the activations just before the final linear layers) from the pre-trained classifier and compute the Wasserstein distance between generated samples and the training data in this feature space.
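Concretely, under Gaussian fits of the two feature sets, the 2-Wasserstein distance reduces to the Fréchet distance used by FID; the following sketch assumes that form (the actual feature extractor and any constants in the paper may differ):

```python
import numpy as np

def frechet_feature_distance(feats_a, feats_b):
    """Frechet distance between Gaussian fits of two feature sets:
    ||mu_a - mu_b||^2 + Tr(Ca + Cb - 2 (Ca Cb)^{1/2}).

    For SPD covariances, Tr((Ca Cb)^{1/2}) equals the sum of square roots
    of the eigenvalues of Ca @ Cb, so no matrix square root is needed.
    """
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    trace_sqrt = np.sqrt(np.maximum(eigvals.real, 0.0)).sum()
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * trace_sqrt
```

In practice `feats_a` and `feats_b` would be the classifier features of generated samples and training data, respectively.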
**Q5**. Writing and references
**A5**. We thank you for your suggestions on the writing and references of our work. We will revise them in the revision.
Thank you for your valuable comments. We hope our answers have resolved your concerns, and if you still have any concerns, please do let us know.
[1] Stein Neural Samplers
[2] Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning
[3] Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates
[4] Wide Residual Networks
---
Rebuttal Comment 1.1:
Title: Rebuttal acknowledgement
Comment: I have reviewed your response to my comments and to the other reviewers, in particular to Reviewer Dhq1. As it stands, I don't think that the experimental weakness has been resolved clearly enough for me to increase my score, and I believe that the article should go through a major revision. I do, however, like some of the methodological contributions, which is why I would not mind if this paper were ultimately accepted.
---
Reply to Comment 1.1.1:
Title: More discussions on efficiency and scalability. (part 1)
Comment: Thank you for your response. We are glad that you like the methodological contributions of our work. We acknowledge that on toy examples, the advantage of our neural samplers over MCMC samplers is not significant enough, because, for such simple low-dimensional targets, MCMC samplers can perform quite well.
Before we give further discussions on neural samplers, we would like to emphasize our motivation for introducing neural samplers: **the inference efficiency and the scalability**, which are enhanced by incorporating neural networks. As is shown in the main text and our rebuttal, the KL-NS has shown high efficiency and scalability for high-dimensional targets. This is a main contribution of our work that not only verify the ability of KL-NS but also have a high potential for learning to sample from modern large-scale target distributions such as large-scale EBM or even diffusion models.
**Efficiency**:
the trained neural sampler is very efficient when compared with MCMC samplers because neural samplers do not need sequential iterations, which are the efficiency bottleneck of sampling for many applications. **The neural samplers are able to generate large numbers of samples within little time. This makes them favored for many applications**. Let's take Bayesian inference as an example. At inference time, the user usually needs to use an MCMC sampler, e.g. SGLD, to generate a large number of samples to compute the predictive result. For each MCMC iteration, the whole dataset (or a stochastically selected batch) is reused repeatedly, which is expensive and lacks efficiency. Instead, the user can pre-train the neural sampler on the unnormalized posterior target, and then use it for fast sampling at inference time without accessing the data. This is far more efficient than re-running MCMC chains each time inference is performed.
**Scalability**:
the EBM experiment in the main text highlights the scalability of KL-NS on high-dimensional targets, such as image space distribution. There are two aspects of scalability for KL-NS:
(1) When comparing with MCMC samplers, the KL-NS achieves 100+ times faster than baseline annealed Langevin dynamics with comparable performance (in terms of FID and KID). This acceleration is due to the use of neural networks to construct the samplers. This result demonstrates the scalability of KL-NS to high-dimensional targets and potentially has a high impact when considering more large-scale target distributions such as pre-trained large-scale EBM or diffusion models. We are willing to further explore the applications of KL-NS on larger-scale targets in the future.
(2) Compared with other neural samplers, the KL-NS shows stable training and stronger performance than competing neural samplers. For instance, though the Fisher-NS and KSD-NS work on toy targets, they are empirically not scalable to large targets (we are willing to provide code to verify this claim). The SteinGAN shows some scaling ability, but it suffers from mode-seeking behavior, and its kernel function is hard to tune in practice. Directly using kernels in data space usually does not work well; a more advanced method is to first pre-train a feature extractor and then construct the kernel in feature space. However, this method is difficult to tune because it involves pre-training a **suitable** feature extractor, and performance changes dramatically with different feature extractors. In conclusion, the SteinGAN is difficult to scale to high-dimensional targets and requires complex tuning. | Summary: Efficient sampling from an unnormalized target distribution is of crucial interest in Bayesian inference. Classic methods such as Markov chain Monte Carlo samplers provide unbiased samples but can be computationally expensive. This paper proposes a novel approach, the neural implicit sampler, which employs a neural transformation and leverages generative models to sample from a target distribution. The authors have developed two innovative inference methods: the KL training method, which minimizes the Kullback-Leibler divergence, and the Fisher training method, which minimizes the Fisher divergence. The methods are evaluated on three benchmark cases including 2D target sampling, Bayesian inference, and high-dimensional energy-based models. Measured by the Frechet Inception Distance, the proposed method is over 100 times more efficient than the EBM's sampler.
Strengths: This paper has derived two novel training approaches for neural implicit samplers, via KL divergence training and Fisher divergence training. The authors have also provided a theoretical analysis of the connection between their proposed Fisher training method and the Fisher-Stein sampler proposed by Hu et al. The authors have evaluated their methods on three empirical benchmark studies, and it is impressive that the proposed sampler can produce samples of similar quality while being 100 times more efficient, which demonstrates the efficiency of the proposed method.
Weaknesses: I do not find a clear weakness in the paper. I just have some minor comments:
1. In Table 1, the authors use HMC as a competing method. Since HMC's performance is very sensitive to the two tuning parameters epsilon and L, have the authors picked these two parameters to optimize the performance of HMC?
2. Can the authors make their code repository publicly open if the paper is accepted?
3. In [1], a metric called KID, similar to FID, is proposed, which is shown to converge more quickly to its presumed true value than FID. Can the authors evaluate their method using KID as well to see if the conclusions are consistent with FID?
[1] Bińkowski, Mikołaj, Danica J. Sutherland, Michael Arbel, and Arthur Gretton. "Demystifying MMD GANs", ICLR 2018.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Table 1, the authors use HMC as a competing method. Since HMC's performance is very sensitive to the two tuning parameters epsilon and L, have the authors picked these two parameters to optimize the performance of HMC?
2. Can the authors make their code repository publicly open if the paper is accepted?
3. In [1], a metric called KID, similar to FID, is proposed, which is shown to converge more quickly to its presumed true value than FID. Can the authors evaluate their method using KID as well to see if the conclusions are consistent with FID?
[1] Bińkowski, Mikołaj, Danica J. Sutherland, Michael Arbel, and Arthur Gretton. "Demystifying MMD GANs", ICLR 2018.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Currently, the approach is limited to sampling problems. It would be interesting to see if the method can be applied to generative modeling or image translation problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your useful feedback. We will address your concerns one by one in the following paragraphs.
**A1**. In the experiment of Section 4.1, we tuned the HMC's step size and number of LeapFrog iterations to optimize its performance. Besides, in the rebuttal period, we conducted a new experiment on 6 more 2D targets. In this experiment, we compare our proposed KL-NS and Fisher-NS with 2 one-shot neural samplers (the KSD-NS and SteinGAN), 1 explicit neural sampler (the RealNVP coupling flow), and 3 MCMC samplers (SVGD, LD, and HMC). We put the results and analysis in **Table 1** in the global author rebuttal cell.
The results show that, compared with MCMC samplers, our KL-NS performs comparably with (slightly better than) the LD and HMC samplers, while sampling over 500 times faster (1 versus 500 iterations). Compared with other one-shot neural samplers, the KL-NS performs the best across all targets by a significant margin. This shows that the KL-NS is a strong one-shot neural sampler with high efficiency. Besides, we are willing to compare with more neural-enhanced MCMC samplers to gain a more thorough understanding of our one-shot neural samplers.
**A2**. We promise to release our code if the paper is accepted.
**A3**. Thank you for the useful suggestion. We will add the comparison of KID and FID values in the revision.
Thank you for your valuable comments.
---
Rebuttal Comment 1.1:
Title: The KID metric also validates the KL-Sampler
Comment: **A3**. Thank you for your useful suggestions. We revisited the KID proposed in [1] and calculated the KID value with the same pre-trained classifier and generated samples as we used for the FID values. We were surprised to find that the KID value of our KL sampler is even better than those of the multi-step EBM samplers. We re-organize and summarize the KID values in **Table 3** and will include them in the revision as you suggested.
| Model | NFE | FLOPs | FID | KID |
| :------: | :------: | :------: | :------: |:------: |
| EBM | 250 | 0.58Gx250 | 20.95 | 0.0097 |
| EBM | 200 | 0.58Gx200 | **20.92** | 0.0111 |
| EBM | 150 | 0.58Gx150 | 21.31 | 0.0169 |
| EBM | 100 | 0.58Gx100 | 30.35 | 0.0742 |
| EBM | 50 | 0.58Gx50 | 52.55 | 0.2061 |
| KL Sampler | **1** | **1.11G** | 22.29 | **0.0045**|
However, both FID and KID are imperfect, and each reflects only a certain aspect of sample quality, so it is better to consider both for a comprehensive understanding. We thank you for your constructive suggestion, which really helps to improve our work.
[1] Bińkowski, Mikołaj, Danica J. Sutherland, Michael Arbel, and Arthur Gretton. "Demystifying MMD GANs", ICLR 2018. | Summary: This submission proposes two training algorithms for implicit samplers, based on the KL divergence and the Fisher divergence, respectively. Tractable objective estimators are derived and practical training algorithms are demonstrated. Numerical experiments are conducted in several different settings.
Strengths: The writing is clear for the most part. The idea of an implicit sampler has been studied before, but not from the perspective of this article, especially with the Fisher divergence. The derivation is mathematically sound. The experiments are done in a proper way.
Weaknesses: The reviewer has the following issues.
For the second part of gradient estimator of Fisher divergence, can we directly use
$\mathrm{sg}[s_\theta(g_\theta(z)) - s_q(g_\theta(z))]^{T}s_\theta(x)$? Here "sg" means stop_gradient.
There are a bunch of missing baselines and benchmarks (at least citations). To name a few:
- PIS http://arxiv.org/abs/2111.15141
- AFT http://arxiv.org/abs/2102.07501
- CRAFT http://arxiv.org/abs/2201.13117
- DDS http://arxiv.org/abs/2302.13834
- GFlowNet https://arxiv.org/abs/2301.12594
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your useful feedback. We will address your concerns one by one in the following paragraphs.
**A1**. Thank you for your reminder. The notation $sg$ does mean stop-gradient. We apologize for the confusion and will refine the notation in the revision.
**A2**. We agree that there are many existing methods that also use neural networks for sampling. We will incorporate a discussion of them in the revision. To better explore the pros and cons of our proposed neural samplers, we conduct a new experiment on 2D target distributions to compare our proposed KL-NS and Fisher-NS with 2 one-shot neural samplers, the KSD-NS and SteinGAN, 1 explicit neural sampler, the RealNVP coupling flow [3], and 3 MCMC samplers, the SVGD, LD and the HMC. We put the results and analysis in **Table 1** in the global author rebuttal cell.
The results show that, compared with MCMC samplers, our KL-NS performs comparably with (slightly better than) the LD and HMC samplers, while sampling over 500 times faster (1 versus 500 iterations). Compared with other one-shot neural samplers, the KL-NS performs the best across all targets by a significant margin. This shows that the KL-NS is a strong one-shot neural sampler with high efficiency. Besides, we are willing to compare with more neural-enhanced MCMC samplers to gain a more thorough understanding of our one-shot neural samplers.
Thank you for your valuable comments. We hope our answers have resolved your concerns, and if you still have any concerns, please do let us know.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I decide to keep my score. | Summary: The paper is in the area of approximate inference. The goal is to use approximating distributions whose density is unknown but from which samples can be drawn. The advantage of this approximating class is that it is potentially more expressive than ones where the density is known/tractable. The paper considers approximating distributions based on two distortion measures, namely the Fisher divergence and the reverse KL divergence. In both cases the gradient updates are written in terms of the score of the approximating distribution. This unknown score is then approximated via score matching using samples from the distribution. An equivalence is derived with the Fisher-Stein sampler of Hu et al. for one variant of the algorithm under optimality assumptions. Experiments are carried out, including a comparison to EBM inference on MNIST.
Strengths: I think the basic idea is interesting. Certainly it is true that implicit distributions offer some advantages over more tractable approximating classes if one can deal with the resulting challenges and although there is work in this area (see below) it is less explored than for example normalizing flows. Also I think there is room for improvement on the existing approaches and with more work this approach might provide such an improvement.
Whilst it is not a big jump in methodology conceptually I’m not aware of this exact approach in the prior literature and the smaller details can make a big difference in terms of improved performance.
I think equivalence described with the Fisher Stein sampler and the Fisher divergence variant of the proposed method is interesting. I also think it is not a dead end in terms of the potential of this proposed algorithm variant. Since the equivalence relies on idealized optimality of the witness function the two methods may well differ in practice and this one might be better. So I would encourage the authors to explore this further.
There are quite a few baselines compared to in the Bayesian logistic regression section although the details are somewhat sparse.
Weaknesses: **Unfortunately, the literature review is missing quite a few relevant references.** These papers are relevant enough to the proposed method that their omission is a significant limitation of the submission as it stands. Here is a non-exhaustive list of examples:
*Unbiased Implicit Variational Inference. Titsias and Ruiz. AISTATS 2019.*
*Semi-implicit variational inference. Yin and Zhou. ICML 2018.*
*Semi-Implicit Variational Inference via Score Matching. Yu and Zhang. ICLR 2023.*
This group of papers is on the topic of variational inference with implicit distributions. The methods are somewhat distinct from the proposed submission, but they are close enough that they should be discussed in the text and possibly compared to.
*Operator Variational Inference. Ranganath, Tran and Blei. NeurIPS 2016.* This highly cited work from NeurIPS is significantly earlier than some of the references in the submitted text. The paper uses Stein divergences and highlights the benefit of using implicit distributions.
**Existence of approximating density** One theoretical concern is whether the implicit approximating distributions actually have a density at all. For example, in the experiment in Section 4.3, the dimension of the input random variables is 128 but the output space is of dimension 32 x 32. So the samples will fall on a lower-dimensional manifold of the output space and have no density with respect to the full Lebesgue measure. Note that this is a different issue from the density existing but being unknown/intractable. I think with thought and effort this issue could probably be tamed, but it should be considered and discussed in a paper on this topic.
**The submitted experiments are not entirely reproducible.** Take for example the toy experiment in Section 4.1. No details are given for the baselines in either the main text or the supplement. Hamiltonian Monte Carlo is a gold standard asymptotically exact method which should work well for unimodal distributions in low dimensions. It is therefore not credible for the banana experiment that the proposed algorithm could be better unless the computational budget is constrained or the algorithm is mistuned. No details are given of the tuning or computational budget. Similarly coupling flows are in principle capable of modelling simple low dimensional distributions. There are many possible reasons why they did not in this case but since there are few details given it is not possible to know why. For Section 4.2 the details of the baselines are lacking. The tuning of these algorithms is challenging and can substantially affect their performance. Submitting the code would have helped a lot here.
**The experiments do not cover all relevant questions.** In Section 4.3 whilst I can see that the sampling is faster than annealed Langevin dynamics there are several methods that would plausibly be able to learn a one shot sampler and these are not compared to.
Whilst I acknowledge some discussion of the computational bottlenecks of the method, I would have liked more. For example the training time in the EBM method must be large and is not included in the comparison to the annealed Langevin dynamics.
Since learning point estimates of parameters along side inference is a big advantage of variational approaches I would have liked to see an example of this. For example, learning the parameters of the EBM rather than just using a pretrained one.
**Smaller points:**
Equation following Line 146: The result you are proving here is that the expectation of the score is zero. This is a very well known result. For instance it often comes up when discussing the Fisher Information matrix. So you could spend less of your precious space in the main text proving it.
Line 305: "Besides, now the sampling is only limited for sampling problems" The meaning was unclear for this sentence. This has not affected my review score.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I have no questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I see no negative societal impact.
There is some discussion of limitations but I already mentioned some things I would have liked to see more discussion of. Please see the weaknesses section above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback, we will address your concerns one by one.
**Q1**. missing quite a few relevant references.
**A1**. Thank you for the reminder. We are sorry for the omission of a discussion of VI methods with implicit distributions. We agree that these VI methods are strongly related to neural samplers and have made significant contributions to research on implicit distributions. We will incorporate a discussion of them in the revision.
**Q2**. Existence of approximating density
**A2**. We agree that the intrinsic low-dimensional manifold of an implicit distribution is a common issue when the input and output dimensions differ. However, in practice, the low-dimensional manifold issue can be overcome by adding slight noise to samples from the implicit distribution. Overall, we thank you for the reminder and will incorporate a discussion of this issue in the revision.
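As an illustrative aside, here is a tiny numpy example of the noise trick (a toy 2-D setting of our own, not the image-space one): samples that lie exactly on a 1-D line have a degenerate covariance, and a slight Gaussian perturbation restores full support.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1000)
x = np.stack([z, 2 * z], axis=1)                   # samples on a 1-D line in R^2
noisy = x + 0.05 * rng.standard_normal(x.shape)    # add slight Gaussian noise

# Smallest covariance eigenvalue: ~0 means the samples are degenerate
# (no density w.r.t. the full Lebesgue measure); > 0 means full support.
eig_clean = np.linalg.eigvalsh(np.cov(x.T))[0]
eig_noisy = np.linalg.eigvalsh(np.cov(noisy.T))[0]
print(eig_clean, eig_noisy)
```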
**Q3**. Experiments are not entirely reproducible. Details of Bayesian inference experiment.
**A3**. Thank you for your suggestions. To make a stronger comparison, we refer to the open-source implementation of [4] and run a new comparison experiment on 2D targets. In this experiment, we compare our KL-NS and Fisher-NS with 3 MCMC baselines (SVGD, LD, and HMC), 1 explicit baseline (a coupling flow), and 2 implicit samplers (KSD-NS and SteinGAN). Due to the word limit, we put the results and details of the new experiment in **Table 1** in the global author rebuttal cell. Below we (1) analyze the results of the new experiment to compare our sampler with both MCMC methods and other neural samplers and (2) give details on the Bayesian inference experiments in Section 4.2.
(1) The results show that, compared with MCMC samplers, our KL-NS performs comparably with (slightly better than) the LD and HMC samplers, while sampling over 500 times faster (1 versus 500 iterations). Compared with other one-shot neural samplers, the KL-NS performs the best across all targets by a significant margin. This shows that the KL-NS is a strong one-shot neural sampler with high efficiency. Besides, we are willing to compare with more neural-enhanced MCMC samplers to gain a more thorough understanding of our one-shot neural samplers.
(2) Details on Bayesian inference: In Section 4.2, we use the same settings for Bayesian inference as in [1]. The neural samplers are implemented as a 4-layer MLP with 1024 hidden units in each layer and GELU activation functions. The output dimension of the sampler is 55 and the input dimension is set to 55x10 = 550, following the same setting as in [1]. The score network has the same neural architecture as the sampler, but its input dimension is set to 55, matching the output dimension. We use Adam optimizers with a learning rate of 0.0002 and default beta values for both the sampler and score networks. We use a batch size of 100 for training the sampler and perform 2 updates of the score network for each update of the sampler network. We use standard score matching for learning the score network. We train the sampler for 10k iterations in each repeat and use 30 independent repeats to compute the mean and std of the test accuracy. The learning rate of SGLD is chosen to be $0.1/(t + 1)^{0.55}$ as suggested in [2], and the average of the last 100 points is used for evaluation. For DSVI, the learning rate is 1e-7, and 100 iterations are used for each stage. For SVGD [3], we use an RBF kernel with bandwidth h calculated by the "median trick" as in [3], and 100 particles are used for evaluation with a step size of 0.05.
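For illustration only, a minimal numpy sketch of the SGLD schedule quoted above, applied to a 1-D Gaussian stand-in for the posterior (the actual experiment uses a 55-dimensional logistic-regression posterior; `grad_log_post` is our toy substitute):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for the posterior: N(1, 0.5^2), so grad log p(w) = (1 - w) / 0.25.
mu, var = 1.0, 0.25
grad_log_post = lambda w: (mu - w) / var

w, trace = 0.0, []
for t in range(2000):
    lr = 0.1 / (t + 1) ** 0.55             # the SGLD schedule quoted above
    w = w + 0.5 * lr * grad_log_post(w) + np.sqrt(lr) * rng.standard_normal()
    trace.append(w)

estimate = np.mean(trace[-100:])           # average of the last 100 points
print(estimate)
```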
**Q4**. experiments do not cover all relevant questions
**A4**. To explore the KL-NS and Fisher-NS more thoroughly, we tried to implement an experiment comparing the KSD-NS and SteinGAN on the EBM experiment in Section 4.3 of the main text. We adapt the open-source TensorFlow implementation of the KSD-NS and apply it to MNIST images. For the SteinGAN, we adapt the open-source Theano implementation. We find that the Fisher-NS and KSD-NS do not converge on the high-dimensional MNIST target, so these methods are not scalable to high dimensions in practice. This limitation of scalability has also been discussed in [1]. Besides, we find that the SteinGAN shows strong mode-collapse behavior, in which the generator collapses to certain modes of the EBM distribution such as the digit "5" or the digit "1". We implement the SteinGAN with a pixel-space RBF kernel, viewing each image as a vector. A more complex "image-space" kernel might resolve the mode-collapse issue, and we plan to continue exploring the comparison of the SteinGAN and KL-NS.
**Q5**. The neural sampler requires additional training.
**A5**. We acknowledge that the neural sampler's additional pre-training phase is a limitation. This issue is faced by all other neural samplers as well. However, from another perspective, the computational cost of pre-training is amortized over each inference call of the neural sampler. So if the trained neural sampler is used for sampling many times, the cost of its pre-training becomes negligible.
**Q6**. Experiment for training EBM.
**A6**. The goal of this work is to train a neural sampler to learn to sample from a target distribution, so we think that training an EBM from scratch is not a proper way to evaluate the proposed method, because multiple factors such as the EBM architecture and hyper-parameters could potentially influence the EBM's convergence.
**Q7-8**. Small points.
**A7-8**. Thanks for the suggestions; we will re-organize the presentation in the revision.
Thank you for your valuable comments. We hope our answers have resolved your concerns, and if you still have any concerns, please do let us know.
[1] Stein Neural Samplers
[2] Bayesian Learning via Stochastic Gradient Langevin Dynamics
[3] Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm
[4] Coin Sampling
---
Rebuttal Comment 1.1:
Title: Is it fair to use iterations to standardize the comparisons?
Comment: Thanks for all the effort you have put into the rebuttal. My advice would be to try and get more experiments ready for the submission in future. OpenReview is not a good forum for sharing this complexity of results - for instance you can't share figures on here. Also the rebuttal period is short.
In terms of the experiments, I note that the way you make your experiments fair is typically to use the same number of iterations. Even if we set aside training time, it seems plausible that the iterations have very different compute times for the different types of method, particularly between the classical MCMC methods (which typically evaluate the log density) and the implicit sampler methods (which typically evaluate a whole neural network). In particular, given that your method is O(n^2), it seems that using iterations might benefit your submission quite a bit.
Is this fair? Can you comment more on the actual compute times, please? Thanks.
---
Reply to Comment 1.1.1:
Title: Comparison of inference time. (part 1)
Comment: Thank you for your reply.
First, we respectfully disagree with your claim that our method is $\mathcal{O}(n^2)$. Let n be the number of particles. The **neural sampler's computational cost is at most** $\mathcal{O}(n)$, because it only requires a single pass of the sampler's neural network to generate a batch of samples, and each pass of the neural network has the same cost. Besides, the forward pass of neural networks can be parallelized on computing devices such as GPUs, so the computational cost of neural samplers can be further reduced.
Next, let us make a clarification: we believe there may be a misunderstanding that neural samplers require multiple iterations to generate samples. In fact, the neural sampler **only requires a single forward pass of the neural network to get a batch of samples**, while the MCMC samplers do require multiple (hundreds of) iterations to get a batch of samples. This means that for each target distribution, the neural sampler only needs to evaluate the neural network once, while the MCMC samplers require (at least) 500 iterations to converge. So we think it is fair to compare a 1-iteration neural sampler with 500-iteration MCMC samplers when measuring efficiency, because they produce samples of comparable quality.
Below we address your concerns about the computational costs of neural samplers and MCMC samplers. Before that, let us briefly introduce two kinds of target distributions. Let $p(x)$ denote the target distribution.
(1) For a relatively simple target, both the potential function $\log p(x)$ (up to an unknown normalizing constant) and the score function $s(x) := \nabla_x \log p(x)$ have analytic expressions, so the cost of computing them can be ignored. The computational bottleneck of the MCMC samplers then comes from the iterations: how many times the particles are moved. We call this kind of target "**analytic**".
(2) For some other targets, the potential function is defined through a neural network, as in our experiments on learning to sample from an EBM. For these targets, computing the potential function involves a forward pass of the neural network, and computing the score function involves a backward pass. We call this kind of target "**neural**".
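As a small illustration of the distinction (the names `log_p`, `score_analytic`, and `score_numeric` are ours, for this sketch only): for an analytic target the score is a closed-form expression, whereas for a neural target it would require a backward pass through the network, whose extra cost we mimic here with finite differences.

```python
import numpy as np

# "Analytic" target: log p(x) = -||x||^2 / 2 (standard Gaussian, up to a const),
# so the score nabla_x log p(x) = -x is a free closed-form expression.
log_p = lambda x: -0.5 * np.sum(x**2)
score_analytic = lambda x: -x

# For a "neural" target, log p comes from a network and the score needs a
# backward pass; we mimic that extra cost with finite differences on log_p.
def score_numeric(x, h=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):                # extra density evaluations per dim
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (log_p(x + e) - log_p(x - e)) / (2 * h)
    return g

x = np.array([0.3, -1.2, 2.0])
print(score_analytic(x), score_numeric(x))  # the two agree
```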
We believe the best way to evaluate the efficiency of each sampler is to measure the wall-clock inference time under the same environment. In **Table 1** below, we summarize the computational costs (wall-clock time) of SVGD, HMC, and LD together with our KL-NS neural sampler. The Gaussian, ..., and Squiggle are analytic targets, while the EBM is a neural target. Each sampler generates samples of comparable quality (i.e., their KSD values are comparable). **Table 1** records the wall-clock inference time (seconds) for each sampler when generating 1k samples.
**Table 1**. Inference Time Comparison of MCMC Samplers and Neural Samplers (seconds)
The 2D experiment is conducted on an 8-CPU cluster with PyTorch 1.8.1, while the EBM experiment runs on 1 Nvidia Titan RTX GPU with PyTorch 1.8.1.
stepsize=0.01, iterations=500, num particles=1000, num repeats=100
| Sampler | Gaussian | MOG2 | Rosenbrock | Donut | Funnel | Squiggle | EBM |
| :------: | :------: | :------: | :------: | :------: | :------: | :------: |:------: |
| SVGD(500) | 26.4224 $\pm$ 0.6000 | 26.7897 $\pm$ 0.5712 | 26.1088 $\pm$ 0.4666 | 26.3632 $\pm$ 0.4290 | 26.0555 $\pm$ 0.4232 | 26.0445 $\pm$ 0.3047 | |
| LD(500) | 0.1035 $\pm$ 0.0003 | 0.2168 $\pm$ 0.0008 | 0.1284 $\pm$ 0.0002 | 0.1100 $\pm$ 0.0003 | 0.1339 $\pm$ 0.0032 | 0.1319 $\pm$ 0.0006 | 2.6459 (50 NFE, 64 samples) |
| HMC(500) | 0.3438 $\pm$ 0.0017 | 1.7125 $\pm$ 0.0154 | 0.6725 $\pm$ 0.0044 | 0.3854 $\pm$ 0.0015 | 0.6837 $\pm$ 0.0018 | 0.7250 $\pm$ 0.0017 | |
| **KL-NS** | **0.0014** $\pm$ 0.0000 | **0.0014** $\pm$ 0.0000 | **0.0014** $\pm$ 0.0000 | **0.0014** $\pm$ 0.0000 | **0.0014** $\pm$ 0.0000 | **0.0014** $\pm$ 0.0000 | **0.0642** (1 NFE, 64 samples) |
---
Rebuttal 2:
Title: Thank you for your valuable questions, does our rebuttal resolve your concerns?
Comment: Dear Reviewer, We would greatly appreciate it if you could let us know whether our rebuttal has answered your questions.
---
Rebuttal Comment 2.1:
Title: Reply to authors
Comment: Hello. Thanks for your reply. It is going to take a bit of time to go through what you have added - really the large amount of additional information you have added in your rebuttal should have been in the original submission. | Rebuttal 1:
Rebuttal: We thank all reviewers for your valuable feedback. In the rebuttal period, we ran a new comparison experiment on 2D targets following [1], and report the results in **Table 1**.
In this new experiment, we compare our neural samplers with 3 MCMC baselines (SVGD [2], LD, and HMC), 1 explicit baseline (a coupling flow), and 2 implicit samplers (KSD-NS [3] and SteinGAN [4]). All implicit samplers have the same neural architecture, i.e., a four-layer MLP with 400 hidden units at each layer and ELU activation functions, for both the sampler and the score network (where needed). We evaluate the KSD with the IMQ kernel (implemented in the open-source package sgmcmcjax) on all target distributions as the performance metric reported in **Table 1**.
**Table1**. KSD Comparison of Samplers
stepsize=0.01, iterations=500, num particles=500, num chains=20
| Sampler | Gaussian | MOG2 | Rosenbrock | Donut | Funnel | Squiggle |
| :------: | :------: | :------: | :------: | :------: | :------: | :------: |
| MCMC Samplers | | | | | | |
| svgd(500) | 0.013 $\pm$ 0.001 | 0.044 $\pm$ 0.006 | 0.053 $\pm$ 0.002 | 0.057 $\pm$ 0.004 | 0.052 $\pm$ 0.001 | 0.024 $\pm$ 0.002 |
| ld(500) | 0.107 $\pm$ 0.025 | 0.099 $\pm$ 0.008 | 0.152 $\pm$ 0.030 | 0.107 $\pm$ 0.020 | 0.116 $\pm$ 0.029 | 0.139 $\pm$ 0.030 |
| hmc(500) | 0.094 $\pm$ 0.020 | 0.106 $\pm$ 0.020 | 0.134 $\pm$ 0.034 | 0.113 $\pm$ 0.020 | 0.135 $\pm$ 0.010 | 0.135 $\pm$ 0.033 |
| Neural Samplers | | | | | | |
| Coup Flow | 0.102 $\pm$ 0.028 | 0.158 $\pm$ 0.019 | 0.150 $\pm$ 0.026 | 0.239 $\pm$ 0.013 | 0.269 $\pm$ 0.019 | 0.130 $\pm$ 0.026 |
| KSD-NS | 0.206 $\pm$ 0.043 | 1.129 $\pm$ 0.197 | 1.531 $\pm$ 0.058 | 0.341 $\pm$ 0.039 | 0.396 $\pm$ 0.221 | 0.462 $\pm$ 0.065 |
| SteinGAN | 0.091 $\pm$ 0.013 | 0.131 $\pm$ 0.011 | 0.121 $\pm$ 0.022 | 0.104 $\pm$ 0.013 | 0.129 $\pm$ 0.020 | 0.124 $\pm$ 0.018 |
| **Fisher-NS(ours)** | 0.095 $\pm$ 0.016 | 0.118 $\pm$ 0.013 | 0.157 $\pm$ 0.030 | 0.179 $\pm$ 0.028 | 7.837 $\pm$ 1.614 | 0.202 $\pm$ 0.037 |
| **KL-NS(ours)** | 0.099 $\pm$ 0.015 | 0.104 $\pm$ 0.015 | 0.123 $\pm$ 0.021 | 0.109 $\pm$ 0.015 | 0.115 $\pm$ 0.012 | 0.118 $\pm$ 0.024 |
**Settings**:
For all MCMC samplers, we set the number of iterations to 500, which we find is enough for convergence. For SVGD and LD, we set the sampling step size to 0.01. For the HMC sampler, we tuned the parameters and found a step size of 0.1 with 10 LeapFrog updates to work best. For the coupling flow, we follow [3] and use 3 invertible blocks, each containing a 4-layer MLP with 200 hidden units and Gaussian Error Linear Unit (GELU) activations. The total parameter count of the flow is significantly larger than that of the neural samplers, and we find that adding more coupling blocks does not lead to better performance. For all targets, we train each neural sampler with the Adam optimizer with the same learning rate of 2e-5 and default betas. We use the same batch size of 5000 for 10k iterations when training all neural samplers. We evaluate the KSD every 200 iterations with 500 samples and 20 repeats each time, and pick the lowest mean KSD over the 10k training iterations as the final result. Since our proposed Fisher and KL neural samplers require learning a score network, we find that 5 updates of the score network for each update of the neural sampler work well.
**Analysis**:
Among the MCMC samplers, SVGD performs best by a clear margin. However, SVGD incurs a heavier computational cost ($\mathcal{O}(n^2)$) as the number of particles grows, because its update requires computations over a large kernel matrix. LD and HMC perform almost identically. Among the neural samplers, KL-NS performs best across almost all targets, slightly better than LD and HMC on each target. SteinGAN comes second and is closely comparable to KL-NS. In theory, both KL-NS and SteinGAN aim to minimize the KL divergence between the sampler and the target distribution, albeit in different ways, so their similar performance is unsurprising. Coupling Flow ranks third overall, but it fails to correctly capture the Rosenbrock target. We believe more powerful flows, such as stochastic variants or flows with more complex blocks, would perform better, but these enhancements inevitably bring more computational complexity. Fisher-NS ranks fourth, with one failure case on the Funnel target. We find that KSD-NS is hard to tune in practice and has two failure cases. Besides, KSD-NS has a relatively high computational cost because it requires differentiating through the empirical KSD, which involves large matrix computations when the batch size is large. Overall, the one-shot KL-NS shows strong performance, outperforming LD and HMC, which require multiple iterations.
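To illustrate the quadratic cost mentioned above: each SVGD step forms an $n \times n$ kernel matrix over the particles. Below is a minimal NumPy sketch for a 1-D standard Gaussian target; the bandwidth, step size, and target here are illustrative choices, not the benchmark settings used in the experiments.

```python
import numpy as np

def svgd_step(x, score, h=0.5, eps=0.1):
    """One SVGD update for 1-D particles. Forming the full n x n RBF
    kernel matrix is the source of the O(n^2) cost discussed above."""
    diff = x[:, None] - x[None, :]        # (n, n) pairwise differences
    K = np.exp(-diff**2 / (2 * h))        # kernel matrix K[j, i] = k(x_j, x_i)
    grad_K = -diff / h * K                # d/dx_j k(x_j, x_i): the repulsive term
    phi = (K.T @ score(x) + grad_K.sum(axis=0)) / len(x)
    return x + eps * phi

rng = np.random.default_rng(0)
particles = rng.normal(loc=5.0, scale=0.5, size=50)  # start far from the mode
score = lambda x: -x                                 # score of a standard Gaussian
for _ in range(500):
    particles = svgd_step(particles, score)
# After the updates the particles should sit near the N(0, 1) target.
```

The attractive term drives particles toward high-density regions while the kernel-gradient term keeps them spread out, which is why SVGD avoids particle collapse.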
[1] Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates
[2] Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm
[3] Stein Neural Samplers
[4] Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Not All Out-of-Distribution Data Are Harmful to Open-Set Active Learning | Accept (poster) | Summary: This paper proposes an active learning approach for open-set learning. The proposed approach does not try to avoid sampling OOD data as some of the prior work did. Instead, it samples some of the OOD data intentionally to enhance the OOD detector.
Strengths: 1.The proposed idea is easy to implement and understand.
2.The proposed problem, open-set active learning, is meaningful to me.
Weaknesses: 1.The overall framework is trivial to me. It is a traditional active learning method adapted to an OOD detector and an ID learner.
2.The essential part of active learning, which is the design of the sampling criterion, is neither novel nor plausible to me. For the uncertainty weight, it is merely the prediction score of the model proposed by [27]. What is more, this score tends to approximate the likelihood of a data point being an in-distribution sample, but I do not think it reveals informativeness/sample importance in active learning. Different data points can have very different contributions to learning, and the proposed sampling score does not seem able to differentiate them. The meta-weight can be seen as a loss-based active sampling criterion for an OOD detector, which is trivial to me.
3.All the proposed methods are empirical, there is no quantitative analysis to support the effectiveness of the proposed methods.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: "...framework is trivial to me..."
A1: Open-set active learning aims to strategically select pure ID data and filter out OOD data, and thus a powerful OOD detector is important. The common practice of traditional open-set active learning methods, as shown in Figure 1 (b) in the manuscript, is to rely on ID data to construct the OOD detector. Subsequently, these methods primarily focus on selecting pseudo-ID instances, which weakens the OOD detector due to the lack of active selection of effective OOD data and hurts ID purity during the selection process. To address this challenge, we propose a simple yet effective sampling scheme that progressively selects pseudo-ID and pseudo-OOD instances in each round, especially adding valuable OOD data in the initial round. Thereby, it can enhance the capacity of the OOD detector and simultaneously promote the ID classifier by increasing ID purity. Besides, we also provide a theoretical analysis of why the proposed PAL is better than traditional AL. Specifically, the theoretical results show that the proposed PAL has a better generalization error bound than traditional AL, demonstrating its effectiveness. Please refer to the **Global Response** for more details about the theoretical analysis.
Q2: "... the uncertainty weight, ... the meta-weight..."
A2: Sorry for the confusion. We represent the informativeness of samples by measuring the uncertainty of the most confident ID class. This is accomplished by utilizing the prediction of the OVA classifier. Besides, considering the divergent distributions of ID and OOD data inspired by [Ref 3, Ref 4], we utilize meta-weight to measure the representativeness of each sample by assessing its similarity to the established data distribution. Instances with higher meta-weights are deemed to be more representative as they match better with the existing data distribution. Besides, following the comments of Reviewer oop5, we replaced the meta-weight with Coreset (we denote the new method by Core_PAL) to validate the effectiveness of the meta-weight by conducting experiments on CIFAR-10 and CIFAR-100 datasets. The results are listed in the following table and show that the Core_PAL performs worse than PAL, revealing the effectiveness of meta-weight in selecting instances.
Table 2: Comparison of classification accuracy (%) for Core_PAL and PAL on CIFAR-10 and CIFAR-100 with an ID proportion of 20%.
| | CIFAR-10 | | | | | CIFAR-100 | | | | |
| :------: | :------: | :---: | :---: | :---: | :---: | :-------: | :---: | :---: | :---: | :---: |
| Round | 1 | 3 | 5 | 7 | 9 | 1 | 3 | 5 | 7 | 9 |
| Core_PAL | 87.09 | 93.29 | 96.74 | 97.44 | 98.29 | 44.79 | 49.75 | 59.59 | 63.74 | 65.99 |
| PAL | **91.14** | **95.59** | **97.59** | **98.49** | **98.74** | **45.65** | **53.04** | **60.04** | **65.64** | **69.39** |
[Ref 3] Hugo Larochelle et al., OOD-MAML: Meta-Learning for Few-Shot Out-of-Distribution Detection and Classification, NeurIPS2020
[Ref 4] Cesar Almecija et al., Uncertainty-Aware Meta-Learning for Multimodal Task Distributions, ICLR2023
Q3: "...proposed methods are empirical, there is no quantitative analysis..."
A3: In fact, we don't fully understand this question. We will try our best to answer this question, and any additional discussion is welcome. First, we have provided a theoretical analysis (please refer to **Global Response** for details) to show the effectiveness of the proposed PAL. Additionally, we have performed a repeatability experiment in Appendix B. As shown in Figure 10 in the supplementary material, the central line represents the mean value, while the upper and lower limits correspond to its standard deviations for each color. The results demonstrate that our PAL method exhibits consistently high stability.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for the response.
The rebuttal addresses most of my concerns. I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your helpful comments and feedback. Please let us know if there are further confusions/questions. We are happy to clarify and try to address them. | Summary: This manuscript studies open-set active learning and points out that concentrating solely on selecting pseudo-ID instances may cause the training imbalance of the ID classifier and OOD detector. To address this issue, this manuscript proposes a simple yet effective sampling scheme, dubbed Progressive Active Learning (PAL). Extensive experiments on various open-set AL scenarios demonstrate the effectiveness of PAL.
Strengths: 1.This manuscript points out that concentrating solely on selecting pseudo-ID instances may cause the training imbalance of the ID classifier and OOD detector and proposes Progressive Active Learning (PAL) to solve the problem.
2.Extensive experiments on various open-set AL scenarios demonstrate the effectiveness of PAL.
Weaknesses: 1.The manuscript lacks corresponding theoretical analysis.
2.The manuscript lacks an introduction to related work on open-set semi-supervised learning and safe semi-supervised learning, such as [1], [2], and [3].
3.In line 39-41, “the main challenges in open-set AL revolve around effectively selecting valuable ID instances for classifier training and distinguishing the OOD instances”, Similar viewpoints have already been proposed by [2].
4.The framework in Fig. 2 is confusing and the flow description is not clear enough. For example, how is f^c trained?
5.In every query, how are N^{OOD}_{query} and N^{ID}_{query} set up?
6.The description of the retraining process is not clear. What is the final loss? Is f^c used for the final testing?
7.In line 136, entropy is used for obtaining s^{ID}. Has the author tried other metrics such as MSP in [4] or energy in [5]?
8.The manuscript lacks important baselines, such as [1], [2], [3]. The manuscript needs to be compared with open-set semi-supervised learning and safe semi-supervised learning methods.
9.The manuscript mentions the need for multiple rounds of query and training, so the runtime of the manuscript needs to be compared with the baseline.
[1] Safe deep semi-supervised learning for unseen-class unlabeled data.
[2] Safe-Student for Safe Deep Semi-Supervised Learning with Unseen-Class Unlabeled Data.
[3] Openmatch: Open-set semi-supervised learning with open-set consistency regularization.
[4] A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks.
[5] Energy-based out-of-distribution detection.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: How is N^{ID}_{query} selected? Is the sample with maximum of un(x^u_j) chosen into N^{ID}_{query}? The sample with maximum of un(x^u_j) indicates strong certainty in these samples. Perhaps using model-generated pseudo-labels would be accurate enough, without the need to waste resources on querying these relatively certain samples.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Please see Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Due to the space limit, we put all tables in the **attached one-page PDF**.
Q1: "...lacks corresponding theoretical analysis..."
A1: We have provided a theoretical analysis in the **Global Rebuttal** to support the proposed PAL method. We theoretically show that PAL has a better generalization error bound than the traditional AL method, which means PAL has better generalization ability than traditional AL. We will include more comprehensive theoretical details in the revision.
Q2: "... lacks an introduction to related work on open-set semi-supervised learning and safe..."
A2: Although both open-set/safe semi-supervised learning and open-set active learning aim to construct a more effective ID classifier by reasonably utilizing unlabeled data, the essential difference lies in whether or not manual participation is necessary. Open-set/Safe semi-supervised learning aims to use all the pseudo-ID instances in unlabeled data according to the model's prediction, whereas open-set active learning aims to select the most effective pseudo-ID instances in unlabeled data for manual labeling to retrain the model without using all the unlabeled data. We will provide more discussion in the Related Work section.
Q3: "...Similar viewpoints have already been..."
A3: Open-set active learning aims to strategically select pure ID data and filter out OOD data, and thus a powerful OOD detector is important. The common practice of traditional open-set active learning methods, as shown in Figure 1 (b) in the manuscript, is to rely on ID data to construct the OOD detector and then concentrate solely on selecting pseudo-ID instances, which weakens the OOD detector due to the lack of active selection of effective OOD data and hurts ID purity during the selection process. To address this challenge, we propose a simple yet effective sampling scheme that progressively selects pseudo-ID and pseudo-OOD instances in each round, especially adding valuable OOD data in the initial round. Thereby, it can enhance the capacity of the OOD detector and simultaneously promote the ID classifier by increasing ID purity. Besides, we also provide a theoretical analysis of why the proposed PAL is better than traditional AL. Specifically, the theoretical results show that the proposed PAL has a better generalization error bound than traditional AL, demonstrating its effectiveness. Please refer to the **Global Response** for more details about the theoretical analysis.
Q4: "... the flow description is not clear enough...how is $f^c$ trained? "
A4: Sorry for the confusion. Specifically, in the first round, the number of classes for detector $f^c$ is C, and we train $f^c$ with the initial $D_l$. In the subsequent rounds, the number of classes for detector $f^c$ is expanded to C+1 by actively adding the OOD data. In each round, we retrain the $f^c$ according to $l_{ova}$ loss with $D_{l_{\text{all}}}^t$. The algorithmic pseudocode is included in Appendix A.
Q5: "...$N_{query}^{OOD}$ and $N_{query}^{ID}$ set up..."
A5: Thanks. In our experiments, the label budget is set to 1500 in each round following [Ref 1, Ref 2], i.e., $|D_{query}^{t}| = N_{query}^{OOD_{t}} + N_{query}^{ID_{t}} = 1500$. Specifically, in the first round, to enhance the effectiveness of OOD detection, we set $N_{query}^{ID} = 300$ and $N_{query}^{OOD} = 1200$. In the following rounds, to select more valuable ID instances, we set $N_{query}^{ID} = 1450$ and $N_{query}^{OOD} = 50$. From Figure 4 in the manuscript, we can conclude that the query purity of PAL is superior to that of the comparison methods, which reveals that we can automatically select useful pseudo-OOD and pseudo-ID instances according to this assignment.
[Ref 1] Pan Du et al., Contrastive active learning under class distribution mismatch, ICCV2021
[Ref 2] Kun-Peng Ning et al., Active Learning for Open-set Annotation, CVPR2022
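The budget split described in A5 can be written as a small helper. This is an illustrative sketch of the schedule stated in the response (1500 labels per round; 300/1200 in round 1, 1450/50 afterwards), not the authors' code; the function name is hypothetical.

```python
def query_budget(round_idx, total=1500):
    """Per-round split of the labeling budget between pseudo-ID and
    pseudo-OOD queries, following the schedule described above."""
    if round_idx == 1:
        n_id, n_ood = 300, 1200   # first round: favor OOD to build the detector
    else:
        n_id, n_ood = 1450, 50    # later rounds: favor valuable ID instances
    assert n_id + n_ood == total  # the total budget is fixed per round
    return n_id, n_ood

# Budgets over the first three rounds: (300, 1200), then (1450, 50) twice.
budgets = [query_budget(r) for r in (1, 2, 3)]
```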
Q6: "...retraining process is not clear..."
A6: Sorry for the confusion. Actually, the final loss is described in line 174. The final target of open-set active learning is the ID classification, and thereby we can adopt the $f^o$. Besides, we also give the detection result for supplementary validation using $f^c$. We will modify the descriptions in the revision.
Q7: "...tried other metrics such as MSP in [4] or energy in [5]..." and "...lacks important baselines, such as [1], [2], [3]..."
A7: We have replaced $s^{ID}$ with the MSP/energy-based score and compared PAL with D3SL and OpenMatch. We conducted experiments on CIFAR-10 and CIFAR-100 with an ID proportion of 20%. The results are presented in **Table 3 and Table 4 of the attached one-page PDF**. For D3SL, we employed the official code and tuned hyperparameters such as meta_lr={6e-5, 3e-4, 2e-3}, un_batch_size={100, 128, 256}, and lr_weight={1e-3, 1e-2, 3e-4}; Table 4 reports the best results obtained. The results in Table 3 show that with the $s^{ID}$ metric, PAL achieves the best performance on the CIFAR-10 and CIFAR-100 datasets, compared with PAL with MSP/energy. The results in Table 4 show that PAL outperforms D3SL and OpenMatch on the CIFAR-10 and CIFAR-100 datasets.
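For reference, the MSP and energy scores mentioned in A7 (refs [4] and [5] in the review), along with an entropy score, can all be computed from a single logit vector. This is a generic sketch of those standard scoring rules, not the implementation used in the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def msp_score(logits):
    """Maximum softmax probability: higher suggests more likely ID."""
    return softmax(logits).max()

def energy_score(logits, T=1.0):
    """Negative free energy, T * logsumexp(logits / T): higher suggests ID."""
    return T * np.log(np.sum(np.exp(logits / T)))

def entropy_score(logits):
    """Predictive entropy: lower means a more confident prediction."""
    p = softmax(logits)
    return -np.sum(p * np.log(p + 1e-12))

logits = np.array([4.0, 1.0, 0.5])
scores = (msp_score(logits), energy_score(logits), entropy_score(logits))
```

Each score induces a different ranking of unlabeled instances, which is why swapping the scoring rule (as in Table 3 of the rebuttal PDF) changes the queried sets.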
Q8: "...the runtime of the manuscript needs to be compared with the baseline."
A8: We have included a comparison of running times for PAL and other baselines on CIFAR-10 and CIFAR-100 with an ID proportion of 20% in **Table 5 of the attached one-page PDF**. The results show that PAL has the second-fastest runtime: faster than D3SL, OpenMatch, and LfOSA, but slower than Coreset.
Q9: "How is $N_{query}^{ID}$ selected..."
A9: Actually, the value of $un(x_{j}^u)$ represents the uncertainty, which comprehensively considers the instance's representativeness and informativeness. Therefore, we select the top-$b$ examples with the highest scores, as described in Line 155. We will give more details in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal! After carefully reading the author's response, my concerns have been well addressed. Thus, I have decided to raise my overall score, and I tend to recommend acceptance of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your helpful comments and feedback. Please let us know if there are further confusions/questions. We are happy to clarify and try to address them. | Summary: This paper considers the open-set active learning problem, a sub-topic of active learning that focuses on non-iid settings. The authors constructed both an ID classifier and an OOD detector to implement open-set active learning. Specifically, they proposed a sampling scheme to ensure a balance of ID and OOD samples, which helps both the ID classifier and the OOD detector. The authors conducted a series of experiments, and the results of these experiments verified the effectiveness of the proposed method.
Strengths: 1. The considered problem is important and highly relevant to the machine learning community.
2. The proposal is simple and reasonable. The authors suggest that out-of-distribution (OOD) samples can also help the process of open-set active learning from the perspective of OOD detection. They use a simple and effective balanced sampling approach to alleviate the learning process differences and improve the performance of open-set active learning. This proposal suggests that actively annotating some OOD samples can enhance the OOD detector and help open-set active learning. This supports the claim of the title "Not All Out-of-Distribution Data Are Harmful" and extends the previous focus on mainly querying ID samples.
3. The experiments are sufficient, and the effectiveness of the proposal has been clearly verified.
Weaknesses: CCAL is also a well-known open-set active learning method, which should be discussed and compared in the experiments.
[CCAL] Contrastive active learning under class distribution mismatch. TPAMI’22
It is regretful that there is no relevant theoretical analysis to further support this work.
Miscellaneous:
The results in Table 1 are too dense. Consider adjusting the layout.
Also, consider using the same text font in Figures 3, 4, 5, 6, and 7 as in the main body.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The proposal uses meta-learning to calculate meta-weights for obtaining representative metrics of the samples. Compared to previous methods such as clustering and coreset, does this complex method introduce too much computational overhead? Does it have any special advantages? In the experiment, can we obtain good results using traditional representative metric methods?
---
Thanks for the clarifications. Most of the concerns have been addressed. I will keep my score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have provided a discussion on the broader impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: "CCAL is also a well-known open-set active learning method, which should..."
A1: We have compared the proposed PAL with CCAL on CIFAR-10 and CIFAR-100 with an ID proportion of 20%. The results in Table 1 reveal that PAL outperforms CCAL, because CCAL, like existing open-set AL methods, does not actively use OOD data, which may bias the similarity calculation for unlabeled instances.
Table 1: Comparison of classification accuracy (%) for CCAL and PAL on CIFAR-10 and CIFAR-100 with an ID proportion of 20%.
| | CIFAR-10 | | | | | CIFAR-100 | | | | |
| :---: | :------: | :---: | :---: | :---: | :---: | :-------: | :---: | :---: | :---: | :---: |
| Round | 1 | 3 | 5 | 7 | 9 | 1 | 3 | 5 | 7 | 9 |
| CCAL | 88.90 | 94.35 | 96.20 | 97.55 | 97.70 | 45.10 | 50.90 | 53.40 | 57.20 | 60.45 |
| PAL | **91.14** | **95.59** | **97.59** | **98.49** | **98.74** | **45.65** | **53.04** | **60.04** | **65.64** | **69.39** |
Q2: "...no relevant theoretical analysis to further support..."
A2: As mentioned in the **Global Rebuttal**, we have provided a theoretical analysis to support the proposed PAL method. The generalization results reveal that PAL has a better generalization error bound than the traditional AL method, showing the effectiveness of the proposed PAL. We will provide more comprehensive details about the theoretical analysis in the revision.
Q3: "...results in Table 1 are too dense..."
A3: Thank you. We will modify the table and improve the readability in the revision.
Q4: "...does this complex method introduce too much computational overhead? Does it have any special advantages..."
A4: Actually, the core idea of active learning is to control the budget of manual annotations while maximizing the improvement in model performance. Although the introduction of meta-weights increases training costs, performance is improved by selecting more effective instances. Moreover, we have replaced the meta-weight with Coreset, denoting the resulting method Core_PAL, to validate the effectiveness of the meta-weight. The results in Table 2 show that Core_PAL performs worse than PAL, revealing the effectiveness of the meta-weight in selecting instances.
Table 2: Comparison of classification accuracy (%) for Core_PAL and PAL on CIFAR-10 and CIFAR-100 with an ID proportion of 20%.
| | CIFAR-10 | | | | | CIFAR-100 | | | | |
| :------: | :------: | :---: | :---: | :---: | :---: | :-------: | :---: | :---: | :---: | :---: |
| Round | 1 | 3 | 5 | 7 | 9 | 1 | 3 | 5 | 7 | 9 |
| Core_PAL | 87.09 | 93.29 | 96.74 | 97.44 | 98.29 | 44.79 | 49.75 | 59.59 | 63.74 | 65.99 |
| PAL | **91.14** | **95.59** | **97.59** | **98.49** | **98.74** | **45.65** | **53.04** | **60.04** | **65.64** | **69.39** |
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I agree that the purpose of active learning is to improve performance. However, in practical applications, facing large-scale unlabeled data pools, high computational costs can severely limit the practicality of algorithms. Therefore, could you provide a computational cost analysis of the PAL method, especially the meta-weight learning process, as well as theoretical convergence and empirical convergence rounds in experiments? These will help readers understand the computational costs and practicality behind the algorithm.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable comments. To assess the computational costs, we measured the components related to uncertainty, meta-weight, and the total cost on CIFAR-100 with an ID proportion of 20% for each of the 10 rounds. Our findings indicate that the cost remains relatively stable across rounds, and our method converges reliably. Taking the first round as an example, the total running time is 108.96 min, of which the meta-weight computation takes 71.21 min (including 70.92 min for $S^{ID}$) and classification takes 37.75 min. Additionally, our approach converges within 200 epochs in most rounds. As evident from Table 5 of the accompanying single-page PDF, the proposed PAL is faster than most approaches, except for Coreset. This is due to the more computationally intensive meta-weight approach for measuring sample representativeness, which in turn yields improved performance. Under standard assumptions such as smoothness and bounded variance of the loss function, one can show that the proposed method achieves a convergence rate of $O(1/\sqrt{T})$, where $T$ is the number of iterations. We then empirically find that the Pearson correlation coefficient between the training loss and $1/\sqrt{T}$ is 0.9581, showing that the training loss is highly linearly correlated with $1/\sqrt{T}$, which matches the theoretical convergence rate. Thank you again for your feedback! We will add more experiments in the revised version. Please let us know if you have any additional questions. We are happy to provide additional experimental results. | Summary: In this paper, the authors aim to improve open-set active learning, where the unlabelled set may contain some open-set instances. To do this, they propose a new sampling scheme, called progressive active learning.
Specifically, they use a progressive sampling method that selects valuable OOD data to balance pseudo-ID and pseudo-OOD instances in each round. Experiments on various datasets are conducted to show the effectiveness of the proposed method.
Strengths: 1. This paper is well-written and easy to understand. I believe readers can easily get the core idea of the proposed method.
2. The empirical performance is non-trivial. From Table 1, we can observe the improvement of the proposed method is significant.
3. The motivation is clear. I agree that OOD data should not be simply filtered out for active learning.
Weaknesses: 1. The method is somewhat heuristic and not novel enough. In my view, neither the OVA classifier nor the meta-weight is an original contribution of this paper. The technical novelty of the progressive sampling is not sufficient to support acceptance at this conference.
2. The literature of leveraging OOD data to improve generalisation is missing, like OAT [1], ODNL [2] and Open-sampling [3]. The authors may need to discuss the relationship between the proposed method and these works.
[1] Lee, Saehyung, et al. "Removing Undesirable Feature Contributions Using Out-of-Distribution Data.", ICLR, 2021.
[2] Wei, et al. Open-set Label Noise Can Improve Robustness Against Inherent Label Noise. NeurIPS 2021
[3] Wei, et al. Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets. ICML 2022.
---------
After reading the responses from the authors, my concerns have been well addressed, so I lean towards acceptance for this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do you mean the positive effect of OOD data is only in detecting OOD examples? You write "improving the ID purity in query sets" in line 66.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: " ...The technical novelty of the progressive sampling is not sufficient..."
A1: Open-set active learning aims to strategically select pure ID data and filter out OOD data, necessitating a robust OOD detector. The common practice of traditional open-set active learning methods, as depicted in Figure 1 (b) in the manuscript, is to rely on detected ID data to build the OOD detector. Subsequently, these methods primarily focus on selecting pseudo-ID instances, which weakens the OOD detector due to the lack of active selection of effective OOD data and hurts ID purity during the selection process. To address this challenge, we propose a simple yet effective sampling scheme that progressively selects pseudo-ID and pseudo-OOD instances in each round, especially adding valuable OOD data in the initial round. Thereby, it can enhance the capacity of the OOD detector and simultaneously promote the ID classifier by increasing ID purity. We will highlight the contributions in the revision.
On the theoretical side, we provide a theoretical analysis of generalization error, which is commonly used in the literature of learning theory. Our theoretical analysis reveals that the proposed PAL has a better generalization error bound than the standard AL without using detected OOD data, showing the effectiveness of the proposed PAL. Please refer to **Global Response** for more details about the theoretical analysis.
Q2: "... literature of leveraging OOD data to improve generalization is missing..."
A2: Thanks for your suggestion. Unlike OAT, ODNL, and Open-sampling, which directly utilize OOD data to improve the ID classifier, our approach focuses on enhancing the OOD detector using detected OOD data, thereby controlling the ID purity and improving the ID classifier. We will include more discussion in the revision.
Q3: "Do you mean the positive effect of OOD data is only in detecting OOD examples..."
A3: Sorry for the confusion. The selected OOD data not only strengthens the OOD detector for detecting OOD data in subsequent phases but also improves the ID purity. We will make this clear and give more details in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. My concerns have been well addressed, so I lean towards acceptance for this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your helpful comments and feedback. Please let us know if there are further confusions/questions. We are happy to clarify and try to address them. | Rebuttal 1:
Rebuttal: We sincerely thank the PC, SAC, ACs, and reviewers for handling and reviewing our paper. All constructive and valuable comments are helpful in further improving our paper.
Since most reviewers mentioned the lack of theoretical analysis, we provide a generalization analysis below. Our theoretical results reveal that the proposed PAL has a better generalization error bound than traditional AL, showing its effectiveness. We include the details as follows.
We also include additional experimental results in attached ***one-page PDF*** file.
======== Theoretical Analysis ========
To theoretically understand the use of detected OOD data in the OOD detector, we first introduce the following notation. Suppose that the samples $(\mathbf{x},y)\in(\mathcal{X},\mathcal{Y})$ follow an unknown distribution $\mathcal{P}$, where $\mathcal{X} = \mathbf{R}^d$ is the input space and $\mathcal{Y} = \\{0,1\\}$ is the output space. The label of ID data is $y=0$, and the label of OOD data is $y=1$. Let $\ell:\mathbf{R}\times\mathcal{Y}\longrightarrow \mathbf{R}^+$ be the loss of interest. With input $\mathbf{x}$ and its corresponding label $y$, the expected loss is $\mathcal{L}(f) = \mathbb{E}_{\mathcal{P}}[\ell(f(\mathbf{x}),y)]$.
Suppose we have a training dataset $\\{(\mathbf{x_1},y_1), \dots, (\mathbf{x_n},y_n)\\}$ drawn from distribution $\mathcal{P}$; then the empirical loss is $\mathcal{\widehat L}(f) = \frac{1}{n}\sum_{i=1}^{n}\ell(f(\mathbf{x_i}),y_i)$. Let $C(\mathcal{F})$ be some proper complexity measure of the hypothesis class $\mathcal{F}$. We assume that the loss is Lipschitz with constant $L$ and that the Rademacher complexity satisfies $\mathfrak{R}_n(\mathcal{F}) \le \sqrt{\frac{C(\mathcal{F})}{n}}$. To simplify the analysis, we assume the distribution of ID and OOD samples is balanced and that the number of OOD samples equals the number of ID samples in the training set. We suppose that the number of detected ID data is $m$ while the number of detected OOD data is $n-m$, where $m<n$, and that among the detected ID data, the number of real ID data is $m_0 := \alpha m$ while the number of real OOD data is $m_1 := (1-\alpha)m$, with $0<\alpha<1$.
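As a small illustrative aside (not part of the rebuttal), the empirical loss $\mathcal{\widehat L}(f)$ above is just a sample average; the toy detector, the squared loss, and the data below are hypothetical stand-ins:

```python
# Toy illustration of the empirical loss \hat{L}(f) = (1/n) * sum_i l(f(x_i), y_i).
def empirical_loss(f, data, loss):
    return sum(loss(f(x), y) for x, y in data) / len(data)

f = lambda x: 0 if x < 0.5 else 1                 # toy detector: ID (y=0) below 0.5
sq = lambda pred, y: (pred - y) ** 2              # squared loss as l
data = [(0.1, 0), (0.2, 0), (0.9, 1), (0.6, 1), (0.4, 1)]  # f errs on the last point
assert empirical_loss(f, data, sq) == 0.2         # 1 mistake out of 5
```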
**Theorem 1.** For a Lipschitz loss $\ell$ bounded by $c$, we have the following results with probability at least $1-\delta$ simultaneously.
For the proposed PAL method, the generalization error bound is
\begin{align}
\mathcal{L}(f_{PAL})- \mathcal{\widehat L}(f_{PAL}) \le 2L\sqrt{\frac{C(\mathcal{F})}{n}} + c\sqrt{\frac{\log(1/\delta)}{2n}} \le O\left(\frac{1}{\sqrt{n}}\right).
\end{align}
For the standard AL method, the generalization error bound is
\begin{align}
\nonumber \mathcal{L}(f_{AL})- \mathcal{\widehat L}(f_{AL}) \le & L\sqrt{\frac{C(\mathcal{F})}{\alpha m}} + L\sqrt{\frac{C(\mathcal{F})}{(1-\alpha)m}} + \frac{c}{2}\sqrt{\frac{\log(2/\delta)}{2\alpha m}} + \frac{c}{2}\sqrt{\frac{\log(2/\delta)}{2(1-\alpha)m}} \le O\left(\frac{1}{\sqrt{\alpha m}} + \frac{1}{\sqrt{(1-\alpha)m}}\right),
\end{align}
where $0<\alpha<1$ and $m<n$.
**Remark.** Since $0<\alpha<1$ and $m<n$, we have $\alpha m < n$ and thus $\frac{1}{\sqrt{n}} \le \frac{1}{\sqrt{\alpha m}} \le \frac{1}{\sqrt{\alpha m}} + \frac{1}{\sqrt{(1-\alpha)m}}$, showing that the generalization error bound for the PAL method is better than the bound for the standard AL method. That is to say, the use of detected OOD data can improve the effectiveness of the OOD detector.
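A quick numerical sanity check of the Remark's inequality (illustrative only; the rates below drop all constants and logarithmic factors from the two bounds):

```python
import math

# Dominant rates in Theorem 1, with constants and log factors dropped.
def pal_rate(n):
    return 1.0 / math.sqrt(n)                       # PAL: O(1/sqrt(n))

def al_rate(m, alpha):
    # Standard AL: O(1/sqrt(alpha*m) + 1/sqrt((1-alpha)*m))
    return 1.0 / math.sqrt(alpha * m) + 1.0 / math.sqrt((1.0 - alpha) * m)

# Since m < n, each summand of the AL rate already dominates 1/sqrt(n).
for n, m, alpha in [(1000, 800, 0.9), (1000, 500, 0.5), (10000, 9999, 0.99)]:
    assert pal_rate(n) <= al_rate(m, alpha)
```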
**Proof Sketch.** First, we prove the generalization error bound of the proposed PAL method. Following Theorem 1 of [R1], we have
\begin{align}
\nonumber \mathcal{L}(f_{PAL})- \mathcal{\widehat L}(f_{PAL}) \le 2L\mathfrak{R}_n(\mathcal{F})+ c\sqrt{\frac{\log(1/\delta)}{2n}},
\end{align}
where $\mathfrak{R}_n(\mathcal{F})$ is the Rademacher complexity of the function class $\mathcal{F}$ and $n$ is the number of training samples. With the Rademacher complexity $\mathfrak{R}_n(\mathcal{F}) \le \sqrt{\frac{C(\mathcal{F})}{n}}$, we obtain equation (1).
Next, we prove the generalization error bound of the standard AL method. To this end, we denote by $\mathcal{P_0} = \mathcal{P}(\mathbf{x}|y=0)$ the conditional probability for ID data and $\mathcal{P_1} = \mathcal{P}(\mathbf{x}|y=1)$ the conditional probability for OOD data. Let $\mathcal{L_j}(f)$ be the loss from class $j \in \\{0,1\\}$: $\mathcal{L_j}(f) = \mathbb{E_{\mathcal{P_j}}}[\ell(f(\mathbf{x}),y)]$, and let $\mathcal{\widehat L_j}(f)$ be its corresponding empirical loss.
Then by applying the standard analysis for each class $j$ in Theorem 1 of [R1], with probability $1-\delta/2$ we have
\begin{align}
\nonumber \mathcal{L_j}(f_{AL})- \mathcal{\widehat L_j}(f_{AL}) \le 2L\mathfrak{R_{m_j}}(\mathcal{F})+ c\sqrt{\frac{\log(2/\delta)}{2m_j}}, (j = 0,1).
\end{align}
Since the distribution of ID and OOD samples is balanced, we have
$\mathcal{L}(f_{AL}) = \frac{1}{2} \mathcal{L_0}(f_{AL}) + \frac{1}{2} \mathcal{L_1}(f_{AL})$ by the definitions of the loss functions. Similarly, since the number of OOD samples equals the number of ID samples in the training set, we know $\mathcal{\widehat L}(f_{AL}) = \frac{1}{2}\mathcal{\widehat L_0}(f_{AL}) + \frac{1}{2}\mathcal{\widehat L_1}(f_{AL})$. Finally, by applying the union bound, we have
\begin{align}
\nonumber \mathcal{L}(f_{AL})- \mathcal{\widehat L}(f_{AL}) \le L\mathfrak{R_{m_0}}(\mathcal{F}) + L\mathfrak{R_{m_1}}(\mathcal{F})
+\frac{c}{2}\sqrt{\frac{\log(2/\delta)}{2m_0}} + \frac{c}{2}\sqrt{\frac{\log(2/\delta)}{2m_1}}.
\end{align}
With the Rademacher complexity $\mathfrak{R}_{m_j}(\mathcal{F}) \le \sqrt{\frac{C(\mathcal{F})}{m_j}}$ for $j=0,1$, we complete the proof of equation (2) by plugging in $m_0 = \alpha m$ and $m_1=(1-\alpha)m$.
[R1] Sham M Kakade, Karthik Sridharan, and Ambuj Tewari. On the Complexity of Linear Prediction: Risk Bounds, Margin Bounds, and Regularization. In Advances in Neural Information Processing Systems, pages 793–800, 2009.
Pdf: /pdf/915dba3bc2910def9062ffe163c19b6043e2268f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Unconstrained Dynamic Regret via Sparse Coding | Accept (poster) | Summary: This paper studies adaptive online convex optimization, with focus on adapting to unbounded domain and arbitrary time-varying comparator sequence. Different from previous studies which mostly consider the path-length as the prior which appears in the dynamic regret bound, this paper aims to enlarge the range of priors. For a given dictionary of orthogonal feature vectors, it is shown that an $\tilde{O}(\sqrt{E\cdot \text{Sparsity}_H})$ regret bound can be achieved, which not only recovers previous results on path-length, but also allows more flexibility. This paper also strictly improves the result of JC22 by replacing comparator-dependent terms in the regret bound by strictly smaller ones.
Strengths: This paper makes solid contributions to adaptive online convex optimization. It extends the scope of comparator adaptivity from the traditional path-length, to other priors on the comparator via a novel sparse coding framework. The achieved bounds involve more refined dependence on the comparator, and are never worse than previous results. In particular, the special case of the wavelet dictionary improves the SotA result in JC22.
Weaknesses: The main weakness is the unclear presentation of the technical results, which are hard to fully understand and interpret. Though the claimed results seem promising, I have to vote for rejection for now because I can't verify their correctness without a better understanding of the results. Here are some points of confusion I encountered.
For the size 1 dictionary case, isn't the assumption too strong which seems to trivialize Lemma 2.1? In line 211, the comparator is assumed to lie in the span of a single vector $h_{1:T}$, which is just a scaling of the vector $h_{1:T}$. Since the dictionary is fixed and revealed to the player, the range of potential comparators seems very restricted (the degree of freedom is just one through $\hat{u}$).
The same issue carries over to the main result Theorem 1, which assumes the signal $z^{(n)}$ lies in the span of a single vector. Even if the reconstruction error creates certain flexibility, to achieve vanishing regret one still needs to guarantee the comparator is very close to $\sum_{n=1}^N z^{(n)}$, which seems under-expressive: for a given dictionary, the overall degree of freedom is just $n$, while for arbitrary time-varying comparators the degree of freedom is $dT$.
Two examples are presented right after the main theorem. The case of static regret is well understood. However, I feel the case of the orthogonal dictionary requires more technical details, at least for readers unfamiliar with signal processing. For example, what exactly does "orthogonal" mean here? (which vector is orthogonal to which?) It would be great if you could provide an intuitive example here to show the power of your method over previous results.
I'm also concerned with how to choose the dictionary. It seems a good choice of dictionary requires prior knowledge. It's not clear to me if one can adapt to a class of dictionaries.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Can you provide more technical details to address my questions discussed above? In particular, can you provide 1: an example of dictionary that recovers the traditional path-length case, and 2: an example of dictionary that corresponds to some other prior different from path-length?
Can you provide details on using the Fourier dictionary to tackle periodic environments? It looks interesting to me because the naive idea of setting $H_t=(-1)^t I_d$ won't work.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The presentation of technical results is not very clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments! We hope the following clarifications can answer your questions.
- Interpretation of the generic framework
Our generic framework aggregates a collection of single-feature learners. Roughly speaking, each of these single-feature learners is in charge of a fixed direction in the sequence space $\mathbb{R}^{dT}$. Aggregating them means assembling these fixed directions into a subspace of $\mathbb{R}^{dT}$, to which the comparator sequence $u_{1:T}$ belongs.
To recover static online learning, we need $N=d$ feature vectors. The dynamic setting is more challenging: in Theorem 1, if we want to completely eliminate the reconstruction error term $z^{0}$, then we have to use $dT$ feature vectors. As mentioned in your comment, this means setting $N=dT$.
You may wonder if this procedure is computationally efficient, since there are now $dT$ simple learners to aggregate. This is the reason why we focus on the wavelet dictionary in the second part of the paper: among the $O(T)$ simple learners, only $O(\log T)$ of them are active in any given round, therefore computationally their aggregation is not so expensive.
- Meaning of "orthogonal" and example
"Orthogonal dictionary" in this paper means that we consider a dictionary matrix $\mathcal{H}\in\mathbb{R}^{dT\times N}$ whose columns are orthogonal. An example is the Haar wavelet dictionary, e.g., Eq.(8).
To recover (and improve) traditional path-length bounds, we can use the Haar wavelet dictionary.
For a different inductive bias, the Discrete Fourier Transform (DFT) matrix is useful for periodic environments. Besides, the Daubechies wavelet family is a generalization of the Haar wavelet suitable for piecewise smooth environments.
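To make the orthogonality concrete, here is a minimal sketch (not from the paper): the 4-point unnormalized Haar matrix below is a standard construction and may differ from the paper's Eq.(8) in normalization, but it shows what "columns are orthogonal" means.

```python
# Columns of an (unnormalized) 4-point Haar matrix; each column is one feature vector.
H = [
    [1,  1,  1,  0],
    [1,  1, -1,  0],
    [1, -1,  0,  1],
    [1, -1,  0, -1],
]

def col(M, j):
    return [row[j] for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# "Orthogonal dictionary": any two distinct columns have zero inner product.
for i in range(4):
    for j in range(i + 1, 4):
        assert dot(col(H, i), col(H, j)) == 0
```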
- How to choose the dictionary, and adaptivity
The choice of the dictionary relies on (Bayesian) prior knowledge of the environment, which is a key idea of adaptive online learning, and machine learning in general. For *any* dynamic OCO algorithm, if the environment behaves very differently from the inductive bias of the algorithm (e.g., the environment is periodic, but the algorithm guarantees a path-length-based bound), then its performance cannot be good. Our framework is not an exception.
Meanwhile, it is indeed possible to "adapt" to the best dictionary in a certain category: given a collection of dictionaries, we can perform almost as if we know beforehand which dictionary is the best one. This is explained in our line 251 to 257. The idea is to simply combine all the dictionaries into a mega-dictionary, and run our framework verbatim. The dictionary-selection property follows from some nice behaviors of static comparator-adaptive online learning. Achieving this task without using the mega-dictionary is an interesting open question.
- Fourier dictionary
For periodic environments, the most natural idea is to use the Discrete Fourier Transform (DFT) matrix as the dictionary. The limitation is that the DFT matrix is dense itself, therefore computationally, this approach needs to aggregate $dT$ simple learners per round, which is computationally challenging.
In practice, a more appealing approach is to use a smaller dictionary defined from a base frequency and its low order harmonics. Specifically in the one-dimensional setting, given a base frequency $\omega$ and a maximum order $K$, we define two features for all $k\leq K$: the first has per-round component $h_t=\cos(k\omega t)\in\mathbb{R}$, and similarly, the second is $h_t=\sin(k\omega t)$. The base frequency $\omega$ is often determined by the natural periodicity of the environment, e.g., the weather and the traffic flow are roughly daily periodic. Details and supporting experiments are presented on the last page of the appendix. In the weather forecasting experiment, using a moderate amount of features ($K>3$) suffices, which quite significantly outperforms the baseline from [JC22].
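The harmonic feature construction described above can be sketched as follows; the function name and interface are illustrative, not the authors' implementation:

```python
import math

def harmonic_features(omega, K, T):
    """Per-round components h_t of the 2K harmonic features described above:
    for each k = 1..K, one feature with h_t = cos(k*omega*t) and one with
    h_t = sin(k*omega*t), for t = 1..T."""
    feats = []
    for k in range(1, K + 1):
        feats.append([math.cos(k * omega * t) for t in range(1, T + 1)])
        feats.append([math.sin(k * omega * t) for t in range(1, T + 1)])
    return feats

# E.g., daily periodicity with hourly rounds: base frequency omega = 2*pi/24.
features = harmonic_features(2 * math.pi / 24, K=3, T=48)
assert len(features) == 6 and all(len(f) == 48 for f in features)
```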
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: Thank you for your detailed reply.
In my opinion, the general result seems less interesting than its special cases: to recover the general dynamic setting, the algorithm requires $dT$ feature vectors, which is impractical. On the other hand, the Haar wavelet version does provide some solid improvements over previous results.
Though this work has certain limitations currently, the signal-processing perspective is novel and promising, which may open a new avenue for future research. I have updated my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thanks for carefully evaluating our work! | Summary: This paper tackles unconstrained online convex optimization with dynamic regret. Previous works usually assume that the comparator sequence is arbitrary and possibly time-varying, with some fixed form of comparator measurement in the final dynamic regret bound. This paper proposes a new way of measuring the comparator sequence: it first uses a pre-defined subspace (chosen by the user) to characterize the comparator sequence, and then upper bounds the dynamic regret in terms of the complexity of the pre-defined subspace as well as the reconstruction error between the subspace and the comparator sequence. It shows that in almost static environments, the proposed algorithm with a wavelet-constructed subspace could achieve better regret than existing works', possibly in the constant part.
Strengths: 1) It provides the researchers a new perspective of measuring the comparator sequence when upper bounding the dynamic regret, which is interesting and useful in real applications.
2) The specific wavelet based subspace version could result in a regret better than the existing works' with (maybe) smaller constant part.
Weaknesses: 1) The proposed algorithmic framework only improves the dynamic regret of previous works in the constant part. In order to compare the actual improvement in terms of the constants, the author/s need to provide a comparison of the complete regret results instead of an order-O() based one.
2) The paper's result depends heavily on the existing work [MK20], and the subspace-based comparator sequence measurement is the only part that makes it interesting, although previous works like [HW15, ZLZ18] have already shown such an idea, as pointed out by the author/s.
3) The author/s explained the motivation of tackling unconstrained dynamic setting rather than the constrained one. But for the finite range argument in the paper, more often than not, the finite range usually comes from the requirement of the model output and not just some heuristic estimation. I think it's better to also provide some examples to demonstrate the motivation of this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) The proposed algorithm requires the user to provide the subspace, which is very critical for the final dynamic regret result. If the user provided subspace is not good enough, does that mean the resulted algorithm will have bad performance? If so, how to guarantee the performance?
2) Since the proposed algorithm depends on the Algorithm 3 a lot, do you think it may make more sense to move Algorithm 3 from the Appendix to the main paper?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback!
- Improvement over [JC22].
We would like to respectfully clarify that compared to [JC22], our bound depends on a tighter complexity measure of the comparator sequence $u_{1:T}$. Such an improvement is considerably more substantial than improving the multiplicative constant of the conventional minimax regret bounds. In fact, multiplicative constants and logarithmic factors are omitted in our analysis. Examples 1 and 2 demonstrate improvements in the *exponent* of $T$, rather than in multiplicative factors.
- Dependence on [MK20]
Our framework is actually very general: any static comparator adaptive OCO algorithm can be applied to replace [MK20], such as [MO14, OP16, CO18]. We only adopt [MK20] as an illustrative example.
- Difference with [HW15, ZLZ18]
As discussed in the paper, our framework is fundamentally different from [HW15, ZLZ18], as we use *linear combinations* of the side information, rather than *convex combinations*, to approximate the comparator sequence. This is the key reason behind its success, as it leads to natural connections to linear transforms, one of the most fundamental ideas of signal processing.
Besides, we would like to argue that despite bearing a natural idea, the analysis in this paper is a nontrivial one, and the strength of the wavelet result is quite surprising in our opinion.
- Bounded domain setting
If we are given a bounded domain, then there is a useful projection technique [Section 4, Cut20] that converts an unconstrained algorithm to a constrained one, without changing its regret bound. Therefore, our unconstrained dynamic regret bound also improves the standard bounded domain dynamic regret bound from [ZLZ18], modulo logarithmic factors.
Moreover, Appendix E discusses an application in fine-tuning time-series forecasters, which quite naturally motivates our unconstrained setting (due to its enhanced adaptivity). The relevant discussion is line 903 to 915.
- Dependence on the quality of the dictionary
Indeed, the performance of our generic framework depends on the quality of the dictionary. The rationale is a classical and natural one: without a good inductive bias from the dictionary, we cannot compete with a benchmark that "knows the future''. In a consistent manner, all the existing dynamic regret bounds are trivially $O(T)$ in their worst case.
With this, the main strength of our framework is its versatility: it is strictly more general than existing approaches without dictionaries. One could always pick the Haar wavelet dictionary, which leads to better quantitative bounds than the baselines, in almost static environments.
---
Rebuttal Comment 1.1:
Comment: I appreciate the rebuttal. I agree that the regret bound depends on a tighter complexity measure of the comparator sequence. But the sparsity claim in Eq.(7) is a bit exaggerated: although the bound does exhibit the sparsity indicated there, once the E term is taken together, it is actually a reformulation of the numerator term from the sparsity. Although I agree it is still tighter than previous works' results, the improvement is not that significant. Since my novelty concerns are not well addressed, I will keep my score unchanged. | Summary: In this paper, the authors examine the dynamic regret of Online Convex Optimization (OCO) within the context of unbounded comparator sequences. To address this issue, they introduce a novel framework of sparse dictionary coding for online optimization. Following this, the authors provide theoretical proofs of the regret bounds applicable to different types of dictionary matrices - the general dictionary matrix, the orthogonal dictionary matrix, and the Haar wavelet dictionary matrix. Of notable mention is the result pertaining to the Haar wavelet dictionary matrix, where the authors establish a regret bound that surpasses the current state-of-the-art.
Strengths: - Originality & Significance: The introduction of the sparse coding framework for Online Convex Optimization (OCO), complemented with the corresponding proof, is highly innovative. Furthermore, considering the dynamic regret of OCO in the unbounded domain is very crucial. The result that the authors obtained is related to the comparator average, first-order variability, and second-order variability, rather than the path length. This indicates that the authors have achieved smaller dynamic regret under more adaptive conditions, thereby enriching optimization theory within the community.
- Quality & Clarity: The work is clear, well written, and technically sound.
Weaknesses: - Comparing the proposed method with the meta-expert optimistic online gradient descent method as described in [1] could be beneficial. Particularly in Examples 1 and 2, it appears that the meta-expert optimistic online gradient descent method can also achieve a regret of $\mathcal O(\sqrt T)$, considering the path-length is actually $\mathcal O(\sqrt T)$. This comparison might offer a broader perspective.
- I'm concerned about the computational complexity of $\mathcal O(d\log T)$ for high-dimensional problems, especially when compared to the $\mathcal O(\log T)$ complexity of both the meta-expert OGD and the meta-expert optimistic OGD methods. If possible, an in-depth discussion on this issue would be helpful for readers.
- Additionally, the main text seems to miss Algorithms 3-5. This omission may cause minor confusion for readers on a first read. Rectifying this could help enhance the flow and clarity of the paper.
Ref: [1] P Zhao et.al., Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See above discussions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback and your support of our paper!
- Related work [1]
Thanks for bringing it to our attention. Both [1] and our paper study how to achieve more adaptivity in dynamic online learning, but they take different directions. For example, [1] has an additional smoothness assumption, therefore the quantitative results are not directly comparable to ours. We'll discuss this related work in the camera ready version.
- Computational complexity of meta-expert OGD
It's possible that we missed its latest improvement: to our knowledge, the meta-expert OGD baseline [Ader, ZLZ18] also runs in $O(d\log T)$ time per round, right? There are $O(\log T)$ base algorithms running in parallel, and each of them is OGD in $\mathbb{R}^d$, which runs in $O(d)$ time per round.
- Finally, thanks for the suggestions on the organization of this paper.
---
Rebuttal Comment 1.1:
Comment: I'd like to express my gratitude for the author's clarifications and comments. Indeed, the meta-expert OGD runs in $\mathcal O(d\log T)$ time per round. I acknowledge my mistake and appreciate the correction made by the author. | Summary: This paper studies the universal dynamic regret minimization problem with an unconstrained decision domain. The authors propose a sparse coding framework, which converts the dynamic regret minimization problem in the time domain into a static regret minimization problem in the transform domain. A comparator-adaptive static regret bound in the transform domain implies a dynamic regret bound in the time domain. Specifically, by choosing the dictionary as the Haar wavelet basis, this paper achieves an improved dynamic regret bound with better dependence on the range of the comparator sequence. Several concrete examples are provided to illustrate the superiority of the proposed bound.
Strengths: + This paper provides a novel framework to obtain universal dynamic regret bounds via sparse coding. The conversion from a comparator-adaptive static regret bound to a dynamic regret bound in the time domain is interesting to me.
+ This paper achieves improved universal dynamic regret bound with the new framework, which is strictly better than the existing results. Several examples are provided to illustrate the advantages of the proposed bound.
+ This paper is well-written and provides a sufficient discussion of the related literature.
Weaknesses: I do not find a major weakness in the paper, but there are still some minor comments:
- about the general formulation (Eq. (5)): it would be nice to mention that the dynamic regret bound (Eq.(5)) is not universal dynamic regret bound in general as it only holds for the comparator $u_{1:T}\in\mbox{span}(h_{1:T})$. The universal dynamic regret bound can only be achieved with an appropriate choice of the dictionary.
- about the examples: this paper has listed several examples with the specific choice of the comparator sequence to show the superiority of the proposed bound. However, since the main focus of this paper is the universal dynamic regret, it is unclear in which situation the listed comparator sequence is an appropriate benchmark that can minimize the right-hand side of Eq (2). I suggest the authors provide more concrete examples to illustrate the advantage of the proposed bound with certain specific loss functions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - could you provide more concrete examples to illustrate the superiority of the proposed bound? (please refer to the second point of the weakness for more details)
- is it possible to achieve the $\tilde{O}\left(\Vert \bar{u}\Vert\sqrt{T}+\sqrt{P\bar{S}}\right)$ bound without the knowledge of $T$. Could you highlight what is the main difference to obtain such a bound?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: One of the main limitations of the paper is that the tightness of the proposed bound is still unclear. The authors discuss this issue at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments and your support of our paper!
- Clarification on Eq.(5).
Thanks for the suggestion, we will add the remark that the bound holds for $u_{1:T}$ in the span of the feature vectors.
- Example of the loss functions.
The question is on whether there exist loss functions $l_{1:T}$ such that the comparators $u_{1:T}$ from Example 1 and 2 are actually good comparators with low cumulative loss. This is a great point, and let us use time series forecasting as an example (Appendix E).
Consider the true time series $z_{1:T}$ being the sequences from our Example 1 and 2, and the loss functions in OCO are the absolute loss, $l_t(x)=|x-z_t|$. As the comparator sequence, the true time series $z_{1:T}$ suffers zero loss, therefore the total loss of our OCO algorithm is upper bounded by its regret bound with respect to $z_{1:T}$. In this case, our improved regret bound over [JC22] translates to a smaller total loss bound.
Essentially, this example shows that the *restricted dynamic regret bound* obtained from our universal bound improves the one obtained from [JC22]. Such an argument is a relaxation of the oracle inequality Eq.(2), and a natural next step is to directly characterize the infimum on the RHS there. To our knowledge, this is a less studied topic within comparator adaptive online learning, which could be a good direction for future works.
- Anytime $\tilde O\left(||\bar u||_2\sqrt{T}+\sqrt{P\bar S}\right)$ bound
This can indeed be achieved from our core fixed-$T$ result, which we realized after the submission. Details are as follows. We will add this small improvement to the camera-ready version.
In our current proof, we have (line 801 and 802),
$
\mathrm{Regret}(u_{1:T})\leq \tilde O\left(\sum_{m=1}^{m^*}||\bar u_m||_2\sqrt{2^m}\right)+\tilde O\left(\sqrt{P \bar S}\right).
$
For the first sum on the RHS, same as our proof of Theorem 2 (line 722 to 727),
$
\sum_{m=1}^{m^*}||\bar u_m||_2\sqrt{2^m}\leq \tilde O\left(||\bar u||_2\sqrt{T}+\sqrt{\bar E}\right).
$
The remaining task is to show that $\sqrt{\bar E}$ is dominated by $\sqrt{P\bar S}$, and thus can be combined into the latter. Plugging in their definitions, it suffices to show that for all $t$, $||u_t-\bar u||_2\leq P$. This follows from
$
||u_t-\bar u||_2 \leq T^{-1} \sum_i ||u_t-u_i||_2\leq \max_i ||u_t-u_i||_2,
$
and for all $i,t\in[1:T]$, $||u_t-u_i||_2\leq P$. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards | Accept (poster) | Summary: The paper proposes rewarded soup (RS), a simple technique for combining policies trained for different rewards into a single policy performing well on a particular convex combination of those rewards. The technique consists in linearly interpolating weights of individual policies, using the fact that they share the same initialisation and remain linearly connected during finetuning on different rewards. The approach is well-motivated theoretically and thoroughly evaluated on a diverse array of tasks ranging from language generation, image captioning and image generation to robot locomotion.
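For concreteness, the weight interpolation the summary describes could be sketched as follows (a toy stand-in using plain Python lists in place of tensors; all names are hypothetical, not the authors' code):

```python
def rewarded_soup(weights_a, weights_b, lam):
    """Linearly interpolate two fine-tuned checkpoints that share an initialization.
    weights_a / weights_b map parameter names to lists of floats (stand-ins for
    tensors); lam in [0, 1] trades off the two rewards."""
    return {
        name: [lam * a + (1.0 - lam) * b
               for a, b in zip(weights_a[name], weights_b[name])]
        for name in weights_a
    }

# Toy checkpoints with a single two-dimensional parameter.
wa = {"layer.w": [1.0, 0.0]}   # fine-tuned on reward 1
wb = {"layer.w": [0.0, 2.0]}   # fine-tuned on reward 2
mid = rewarded_soup(wa, wb, 0.5)
assert mid["layer.w"] == [0.5, 1.0]
```

Sweeping `lam` over [0, 1] traces out the family of interpolated policies whose rewards the paper evaluates against the Pareto front.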
Strengths: 1. The paper addresses crucial problems of accounting for diverse preferences and adapting to changes in reward specification that frequently arise in the emerging and important field of aligning generative models with human preferences.
2. Rewarded soup is well-motivated theoretically as a Pareto coverage set of policies for linear combinations of individual reward functions. I found working hypotheses 1 and 2 to be very helpful in understanding how RS works. I’m also convinced by empirical evidence for these hypotheses being true.
3. The fact that linear mode connectivity also holds for RL policies trained for different rewards is an interesting finding about deep learning overall. I found it somewhat surprising that interpolated weights outperform the interpolated rewards so consistently.
4. The paper is well-written and easy to follow despite being very dense. The theory part connects with experiments very well. I also appreciate the plots (e.g. Figure 2) being easy to navigate while conveying a lot of information.
5. The experiments are very thorough and diverse and I find them compelling.
Weaknesses: I think the discussion of reward misspecification could be more nuanced. I think the claims that RS “mitigates reward misspecification” (line 75) and “If Hypothesis 2 is true, then RS can mitigate reward misspecification” (line 161) should be framed a bit more cautiously, making it clear that it’s a very particular kind of reward misspecification: when the real reward is linear in a set of proxy rewards. I don’t think this is representative of most kinds of reward misspecification that we see and that we should be worried about, such as context-dependence of human preferences or biases of data workers providing feedback.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. What’s the relation between scatter points and curves on the plots: are curves a smoothed based on scatter points? What smoothing technique did you use?
2. How much does the performance of RS depend on policies being close to their shared base models? How does the RS front evolve over the course of finetuning; does it degenerate after some number of gradient updates?
3. Relatedly, does using parameter-efficient finetuning (e.g. LoRA) play a role? In the paper, you claim to use LoRA only for computational efficiency reasons, but shouldn’t it also significantly help to maintain linear mode connectivity? Does RS work without LoRA?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations and societal impacts are discussed thoroughly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank R.TSwH for this positive review and the great understanding of the empirical and theoretical components of our work.
---
### Q1. Reward misspecification
In the revised version of the paper, we will clarify the discussion l.75 and l.161 on reward misspecification being mostly for linear rewards. Yet, please note that in Figure 4b (and in Figure 9 from Appendix D.2) we actually observe that "despite the lack of theoretical guarantees, weight interpolation improves results **even for non-linear reward**" (l.167). We speculate RS actually maximizes the projection of the user's reward on the linear subspaces defined by the different proxy rewards.
---
### Q2. Smoothing functions in plots
The curves fit the points with a **Savitzky-Golay smoothing filter** (inspired from this [blog](https://www.datatechnotes.com/2022/05/smoothing-example-with-savitzky-golay.html)) and a **quadratic interpolation** (inspired from this [stack overflow](https://stackoverflow.com/questions/52014197/how-to-interpolate-a-2d-curve-in-python)). The code is detailed below.
```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import savgol_filter
def smoothing(x, y):
    # Smooth each coordinate with a short Savitzky-Golay filter
    # (window 3, polynomial order 1), pinning the endpoints.
    x_smooth = savgol_filter(x, 3, 1)
    x_smooth[0], x_smooth[-1] = x[0], x[-1]
    y_smooth = savgol_filter(y, 3, 1)
    y_smooth[0], y_smooth[-1] = y[0], y[-1]
    points = np.array([x_smooth, y_smooth]).T
    # Arc-length parameterization of the polyline, normalized to [0, 1].
    distance = np.cumsum(np.sqrt(np.sum(np.diff(points, axis=0) ** 2, axis=1)))
    distance = np.insert(distance, 0, 0) / distance[-1]
    # Resample the curve at 75 evenly spaced arc-length positions.
    alpha = np.linspace(0, 1, 75)
    interpolator = interp1d(distance, points, kind="quadratic", axis=0)
    curve = interpolator(alpha)
    return curve.T
```
---
### Q3. How much does the performance of RS depend on policies being close to their shared base models? How does the RS front evolve over the course of finetuning; does it degenerate after some number of gradient updates? (see [R.ntSF.Q1](https://openreview.net/forum?id=lSbbC2VyCu&noteId=UEM7DRMGYu))
As detailed in Remark 1, "when the weights remain close, we can theoretically justify Hypotheses 1 and 2 (see Appendix B.2), and, more broadly, demonstrate that WI approximates ensembling (see Lemma 4 [in Appendix B.3])" (l.146). In other words, good performance is guaranteed when the weights remain close; thus longer training runs may be worrisome, as the models may potentially diverge in the weight space.
We investigate this question in the one-page rebuttal pdf, for the news summarization task (in Figure 3.a) and for the captioning task (in Figure 3.b); we double the number of training steps, and report multiple RS fronts over the course of fine-tuning.
Fortunately, we **consistently observe good performances for RS throughout the full fine-tuning**, confirming that the pre-trained initialization is sufficient to enforce the LMC, and validating the insights from previous works [Neyshabur2020] in supervised learning.
[Neyshabur2020] What is being transferred in transfer learning? NeurIPS.
---
### Q4. Does RS work without LoRA?
Actually, **most of our experiments are without LoRA**.
- for the image generation task, we fine-tune 10% of the weights of the diffusion model, "corresponding to the cross-attention layers and the bias/scaling parameters" (l.1047).
- for the visual grounding task, we fine-tune the transformer end-to-end.
- for the locomotion task, we fine-tune the MLP end-to-end.
- for the captioning task, we usually fine-tune the text decoder with the convolutional visual encoder frozen, but we show in Figure 10.d that RS convexity is actually even better when training end-to-end.
Therefore, we argue that RS is agnostic to the "parameterization" strategy in training.
As a final note, [Li2022] have observed in NLP that weight interpolation works even better in larger architectures. Intuitively, a large number of parameters may facilitate the orthogonality of the fine-tuned updates observed in [Ilharco2023], which "speculate that this [orthogonality] enables the combination of task vectors via addition with minimal interference". This fact may explain why end-to-end fine-tuning in captioning provides better convexity in Figure 10.d than when keeping the visual encoder frozen in Figure 3.a. Moreover, this insight suggests that, as LoRA reduces the number of trainable weights, **performances might actually get better with full end-to-end fine-tuning than with LoRA** (as currently done in our text-to-text experiments). This is a promising research direction for future work.
[Li2022] Branch-Train-Merge: Embarrassingly parallel training of expert language models.\
[Ilharco2023] Editing models with task arithmetic. ICLR.
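For concreteness, the weight interpolation at the core of RS can be sketched over raw parameter dictionaries (a toy illustration, not the authors' training code; `rewarded_soup` and `lam` are our names for the λ-interpolation):

```python
import numpy as np

def rewarded_soup(sd1, sd2, lam):
    """Interpolate two fine-tuned checkpoints sharing the same architecture.

    lam = 0 recovers the first policy's weights, lam = 1 the second's.
    """
    return {name: (1 - lam) * sd1[name] + lam * sd2[name] for name in sd1}

# Two toy "policies" represented as parameter dictionaries.
sd1 = {"w": np.array([1.0, 3.0])}
sd2 = {"w": np.array([3.0, 1.0])}
mid = rewarded_soup(sd1, sd2, 0.5)  # balanced trade-off between both rewards
```

Since all fine-tunings start from the same pre-trained initialization, such interpolated weights remain in the linearly connected region where the LMC is expected to hold.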
---
Thank you once more for your feedback. We remain open to further suggestions and discussions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response! I appreciate the experiment involving finetuning for more gradient steps. I stand by my (high) score. | Summary: This paper presents reward soups (RS) which is the idea of starting with a pre-trained network, which is finetuned to multiple proxy rewards (say, multiple different criteria), and at test time, infers a reward as a linear combination of these proxy rewards and uses this to linearly combine the corresponding weights which is then used for prediction/generation. In contrast to naive variants of multi-objective RL which trains many different policies (far greater than the number of proxy rewards) to obtain a high fidelity policy for preferences encountered at test time, the proposed approach ends up working while training a number of policies that is equal to the number of proxy rewards while showing reasonable empirical performance.
==> post rebuttal: updated score.
Strengths: The problem formulation and proposed approach are topics of increasing interest and relevance to the community. The paper presents interesting results for many practically relevant and useful benchmarks.
Weaknesses: - There are few comparisons to approaches in multi-objective RL, which makes it unclear how one can imagine this paper's results approaching notions of Pareto-optimal trade-offs. While the paper acknowledges this issue, it leaves open huge gaps as to what can be achieved using the suite of approaches that exist in multi-objective RL (e.g., even starting from reference 130 cited in this paper).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Something that appears unclear is that linear mode connectivity etc. typically makes sense in the context of supervised learning when training with the same set of labels. Here, the paper attempts to argue a similar perspective with different sets of labels, and in the RLHF context. Attempting to argue about both of these (particularly the latter) is non-trivial, since saying that a linear combination of parameters of a policy network optimizes the long-term (i.e., generation-level) linear combination of rewards posits very strong assumptions on the structure of the optimal policy, for which I do not yet see a clear argument in this paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank R.bXWy for reviewing our work. Yet, with all due respect, there is an inaccuracy in the summary by R.bXWy: "at test time, [we do **not**] infer a reward as a linear combination of these proxy rewards". More precisely, we show that interpolated weights can approximate the optimal policy for interpolated rewards, both empirically (in Section 3) and theoretically (in Appendix B.2), as detailed below.
---
### Q1. Towards Pareto-optimality and empirical comparisons with other MORL strategies
As stated l.292, "when dealing with multiple objectives in deep learning, the common strategy is to combine them into a single reward"; in particular, the linear MORL is now standard to train LLMs with RLHF [Glaese2022]. Thus, "as the true Pareto front is unknown in real-world applications" (l.177), we use this linear MORL as the reference to evaluate Pareto-optimality. As explained in [R.QPfR.Q1](https://openreview.net/forum?id=lSbbC2VyCu&noteId=58jAIy2tXU), the key reason is that: "in full generality, improvements in initialization, RL algorithms, data, or specific hyperparameters could enhance performances. [Thus] the Pareto front [...] needs to be defined with regard to a training procedure" (l.151). Our conclusion was that RS is an empirical solution **towards** Pareto-optimality, with a limitation highlighted in the paper's name.
Now, regarding the other MORL strategies, please note that **they are not practical** for large-scale experiments, as acknowledged by R.stHc who stated: "compared with previous work, [our approach is] much more applicable and flexible to complex application scenarios". For example, (i) "these works are mostly for **academic benchmarks**" (l.299) or "games such as ATARI" (l.890), and (ii) none have been used for RLHF, for fine-tuning foundation models, or for deep networks with billions of parameters.
Critically, their implementations are complex, as most introduce **specific hyperparameters** or even "**modify the training procedure**" (l.300); for example, the reference 130 [Yang2019] requires a change in Bellman equations. In contrast, RS requires zero modification to the optimization algorithm (such as PPO), and thus can be used on top of any RLHF system (such as [TRL](https://github.com/lvwerra/trl)). If R.bXWy is aware of any open-source implementation of any MORL algorithm for RLHF of LLMs, we would run the experiments, report the numbers, but also verify if the LMC holds and **apply RS on top of those refined solutions**.
Finally, **performance and simplicity are not the only advantages of RS over other MORL strategies, as discussed at length in Appendix A.2**. In brief, RS "is compatible with the inherent iterative engineering process of alignment" (l.890): "RS can continually include adjusted opinions while preventing forgetting of the old behaviours" (l.891). For example, if a new reward is defined, RS requires one single additional training, whereas the other MORL would require starting again from scratch.
[Glaese2022] Improving alignment of dialogue agents via targeted human judgements.\
[Yang2019] A generalized algorithm for multi objective reinforcement learning and policy adaptation. NeurIPS.
---
### Q2. Linear mode connectivity and theoretical guarantees for the near-optimality of RS
Our Hypothesis 1 tries to properly define the LMC when considering multiple metrics: R.TSwH "found [it] to be very helpful in understanding how RS works". Its empirical validation was arguably far from obvious; yet, we consistently obtain positive results in Section 3, for various setups and scenarios, even for generation task involving long term dependencies such as text-to-text with LLaMA, or image generation with diffusion models. Then, we state l.322: "RS relies on an empirical finding: the LMC, which currently lacks full theoretical guarantees [in our complex RL setup with multiple rewards, but actually] even in the simplest case of moving averages [Izmailov2018]" in supervised learning with one single set of labels.
However, we'd like to respectfully emphasize that **we do provide a theoretical and novel "argument in this paper"**; in Appendix B.2 "we provide theoretical guarantees for the near-optimality of RS when considering quadratic rewards" (l.908). This is referenced in the main paper l.146 in Remark 1 and also l.141-143, where we state: "we theoretically prove in Appendix B.2 [that our Hypotheses 1 and 2] approximately hold when rewards are replaced by their second-order Taylor expansion with co-diagonalizable Hessians, a simplified setup justifiable when weights remain close", and a common assumption in deep learning (as argued in Remark 4). Specifically, considering a linear preference $\hat{\mu}$ over two rewards $R_1$ and $R_2$, Lemma 3 bounds the difference $\Delta R_{\hat{\mu}}$ between the rewards obtained by (i) the optimal policy and (ii) our interpolated solution by:
$$ \Delta R_{\hat{\mu}} \leq \frac{\hat{\mu}^2(1-\hat{\mu})^2(M \Delta_1 - \Delta_2)(M \Delta_2 - \Delta_1)}{\left(\hat{\mu}(1-\hat{\mu})(M-1)^2 + M\right)\left(\left( 1 - \hat{\mu}\right) \Delta_1 + \hat{\mu} \Delta_2\right)},$$
where $M$ is the maximum of eigenvalues ratio across the Hessians of the two rewards, $\Delta_1 = R_1(\theta_1) - R_1(\theta_2)$ and $\Delta_2 = R_2(\theta_2) - R_2(\theta_1)$. This bound is illustrated in Figure 7.
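For intuition, this bound can be evaluated numerically (a small sketch; `lemma3_bound` is our naming, and the input values are illustrative):

```python
def lemma3_bound(mu, M, delta1, delta2):
    """Lemma 3 upper bound on the reward gap of the interpolated policy.

    mu: linear preference over the two rewards; M: maximum eigenvalue
    ratio across the rewards' Hessians; delta1, delta2: the reward gaps
    between the two expert policies, as defined in the rebuttal.
    """
    num = mu**2 * (1 - mu)**2 * (M * delta1 - delta2) * (M * delta2 - delta1)
    den = (mu * (1 - mu) * (M - 1)**2 + M) * ((1 - mu) * delta1 + mu * delta2)
    return num / den

# The gap vanishes at the endpoints, where RS recovers an expert policy,
# and stays small in between for moderate eigenvalue ratios M.
print(lemma3_bound(0.0, 2.0, 1.0, 1.0))  # 0.0
print(lemma3_bound(0.5, 2.0, 1.0, 1.0))
```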
In conclusion, we **provide guarantees with assumptions on the rewards** being quadratic and co-diagonalizable, thus with indirect "assumptions on the structure of the optimal policy" (R.bXWy). This is acknowledged by R.TSwH who stated: "the approach is well-motivated theoretically" and that "the theory part connects with experiments very well". This theoretical analysis will be put forward in the revision.
[Izmailov2018] Averaging weights leads to wider optima and better generalization. UAI.
---
If this clarifies our empirical and theoretical analyses, we would be extremely grateful if R.bXWy could update their review accordingly.
---
Rebuttal Comment 1.1:
Title: Response to author's rebuttal
Comment: Thank you for your response. I understand the relative merits offered by a simple strategy such as rewarded soups; while there are advantages to RS from the perspective of issues such as alignment, re-training, forgetting etc. that appear to be interesting, I was looking to understand what this method offered compared to an RL solution, and that hasn't been adequately addressed by the rebuttal.
1. I find notions of approximate optimality to not be well qualified without understanding what is achievable. This can be obtained through benchmarking some multi-objective RL methods which are directly related to the contributions of this paper. Since the method described in the paper is obviously simple, which is a positive, it will obviously serve the interest of the broader community to attempt to benchmark MORL methods from the literature. If this still is prohibitively infeasible as the authors appear to indicate, consider two possibilities:
(a) I think running an RL method on a few grid points as a reward function and understanding what fraction of headroom is left out using this strategy of rewarded soups is necessary to pin down how good the proposed approach is. With regards to results that are presented in the paper on this front, why does running RL on the scalarized reward sometimes appear to be inferior to RS when RL can pretty much realize the RS solution?
(b) It will still be interesting to understand rewarded soups a bit more on locomotion tasks, where there is broad precedent of running multi-objective RL methods
Please note that RL is one of the central approaches that has helped improve alignment, and this disconnect with RL literature is a gap that I believe falls well within this paper's scope to be addressed. If the authors can address this comment (for instance, as they have presented results with (a)), I am open to improving my score.
2. Thanks for pointing to this. Again, can the authors clarify how sub-optimal the RS policy will be as a function of generation length and how this is captured by the bound?
---
Reply to Comment 1.1.1:
Comment: We would like to thank R.bXWy for acknowledging our rebuttal, and the merits offered by the simplicity of our strategy. The new comment suggests remaining concerns, that we try to clarify and answer below.
---
### "I find notions of approximate optimality to not be well qualified without understanding what is achievable. This can be obtained through benchmarking some MORL methods"
Given a fixed preference $\hat{\mu}$ between two rewards $R_1$ and $R_2$, we would like to compare our RS policy to an oracle (but unavailable) policy maximizing $(1-\hat{\mu})R_1 + \hat{\mu} R_2$ in **test**. In practice, we believe that a sensible and competitive approach is considering the model fine-tuned to maximize $(1-\hat{\mu})R_1 + \hat{\mu} R_2$ in **train**. This linearized MORL is then our reference to evaluate optimality.
We want to clarify that the MORL references [124-133] from the related work Section 4 aim at efficiency, but (as far as we know) usually do not claim to consistently beat this linearized MORL.
The only approaches that might improve performances are actually the references [168-171] from Appendix A.2, such as [Yu2020], that tackle gradient conflicts and different variance scales across tasks.
Their contributions are orthogonal to our paper; yet, for the sake of completeness and to fill the gap between the RL and the alignment literature, we will include results for [Yu2020] on the locomotion task in the revision.
If you think of any MORL method that might reveal a front better than RS's front, please tell us.
[Yu2020] Gradient surgery for multi-task learning. NeurIPS.
---
### "running an RL method on a few grid points as a reward function and understanding what fraction of headroom is left out"
We do something very similar when we quantitatively compare the hypervolume of RS to the hypervolume of the linearized MORL.
Specifically, we take a few grid points with different interpolating coefficients ($\lambda$ for RS and $\mu$ for MORL), and then we compute their hypervolume, i.e., "the area over the curve wrt an optimal point" (l.198). This hypervolume helps measure "what fraction of headroom is left out". We observe that "RS’s hypervolume is 0.367 vs. 0.340 for MORL in Figure 2(a), while it is 1.176 vs. 1.186 in Figure 2(b)" (l.199) in the summarization experiments, and that RS and MORL have the exact "same hypervolume of 0.140" (l.216) in the captioning experiment.
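As an aside, the hypervolume of a 2D front can be computed with a short sweep (a sketch assuming a maximization front of mutually non-dominated points and a dominated reference point; not necessarily the paper's exact implementation):

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2D maximization front wrt a reference point.

    front: iterable of (R1, R2) points, assumed mutually non-dominated;
    ref: (r1, r2) reference point dominated by every front point.
    """
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front, reverse=True):  # sweep by decreasing R1
        hv += (x - ref[0]) * (y - prev_y)     # add the new horizontal strip
        prev_y = y
    return hv

# Union of the rectangles dominated by (3, 1) and (1, 3): 3 + 3 - 1 = 5.
print(hypervolume_2d([(3.0, 1.0), (1.0, 3.0)], (0.0, 0.0)))  # 5.0
```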
---
### "With regards to results that are presented in the paper in this front, why does running RL on the scalarized reward sometimes appear to be inferior to RS when RL can pretty much realize the RS solution?"
Indeed, we observe a few times that the RS solutions are above the linearized MORL solutions. We speculate this is related to the multiple benefits of weight interpolation.
- the main benefit that we discuss in our paper is the ability to interpolate between different policies. From this benefit, we would expect RS to perform similarly to MORL.
- the second benefit from weight averaging is the implicit regularization, causing variance reduction and stabilizing performances. This is the main focus of the traditional weight averaging literature, for example in model soups. We speculate that this second benefit (combined with the first) can explain why RS sometimes outperforms MORL.
---
### How sub-optimal the RS policy will be as a function of generation length and how this is captured by the bound?
We define l.82-84 "a policy by mapping inputs $x$ to $f(x, \theta)$ when parametrized by $\theta$. For a reward $\hat{R}$ [...] our goal is to maximize $\int_{x \in T} \hat{R}(f\left(x, \theta\right))$". In this setup, the $f$ includes the architecture choices, but also other design choices, such as (in our text experiments) the tokenization, the decoding strategy, and in particular the generation length.
Then our Lemma 3 bounds the reward difference obtained for policies with (i) an optimal $\theta^*$ and (ii) RS interpolated weights.
Critically, the rewards in the bound are obtained at fixed generation length for both policies.
More generally, for fair comparison, our experiments are at fixed network and fixed training/inference procedures.
Yet we acknowledge l.151 that "in full generality, improvements in initialization, RL algorithms, data, or specific hyperparameters could enhance performances". The key point is that RS could totally benefit from those improvements, for example longer generations.
Actually, we validate this insight empirically for news summarization, when doubling the generation length at inference, from 32 to 64.
We report below the scores for MORL with $\mu=0.5$ and for RS with $\lambda=0.5$.
These results will be refined and included in the revision.
| | MORL (len 32) | MORL (len 64) | RS (len 32) | RS (len 64) |
|--------|-------|-------|-------|-------|
| $R_1$ | 1.31 | 1.27 | 1.45 | 1.47 |
| $R_2$ | -1.00 | -0.95 | -1.11 | -1.03 | | Summary: In this paper, the authors propose a multi-policy strategy called "rewarded soups" to fine-tune any foundation model, embracing the heterogeneity of diverse rewards. The method combines multiple networks through linear interpolation in the weight space, despite the non-linearities in the network, which efficiently yields Pareto-optimal solutions after training. The authors demonstrate the effectiveness of the approach for text-to-text, text-image, and locomotion control tasks, showing that "rewarded soups" can mitigate reward misspecification. The proposed approach aims to enhance the alignment of deep models and how they interact with the world in all its diversity. The authors highlight the issue of aligning AI systems to specific and diverse needs while making the process more transparent and limiting the cultural hegemony of a few individuals.
Strengths: This paper addresses the reward misspecification problem caused by single priori rewards in current RLHF frameworks for foundation models. In order to solve the problem, this paper proposes a relatively complete framework called rewarded soup (RS). RS combines multiple networks (fine-tuning on different proxy rewards) through linear interpolation in the weight space and selects relative coefficients according to the user’s preferences, yielding Pareto-optimal solutions. The content of the whole paper is complete, and the experiments are sufficient.
Weaknesses: 1. The writing logic of the article could be more coherent in the reviewer's opinion. For example, in 3.3, "Moreover, RS gives a better front than MORL, validating Hypothesis 2." and in 3.5, "Moreover, the front defined by RS indicates an effective balance between risk-taking and cautiousness, providing empirical support for Hypothesis 2, although MORL with $\mu$ = 0.5 (i.e., $\alpha$ = 0.5) slightly surpasses RS's front."
2. Validations on Hypothesis 1 and 2 are based on empirical results. The authors are encouraged to give some theoretical analysis.
3. Lack of experiments on computational costs. According to the results, the proposed method has no strengths compared to MORL. The authors state that RS can reduce computational costs. However, no relative experiments showed in this paper.
4. Most experiments are assigned an N=2 reward model, which is not aligned enough with the "diverse" in the title.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Why does the front passing through the point obtained by MORL fine-tuning on the average of the two rewards support Hypothesis 2 in Figure 2?
2. Some formulas need clarification. For example, is $\lbrace\lambda_i\rbrace_{i}$ equals to $\lbrace\lambda_i\rbrace_{i=1}^N$?
3. $\lbrace\lambda_i\rbrace_{i=1}^N$ is selected by users according to their preferences. How do users select these coefficients? For example, the user will give a preference label over pair-wise (or k-wise) instances in the standard RLHF paradigm.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank R.QPfR for reviewing our work, and try to address the expressed concerns below.
---
### Q1. Empirical validation of Hypothesis 2
Our introduction in Section 3 and the Remark 2 explain why "the front passing through the point obtained by MORL fine-tuning on the average of the two rewards support Hyp. 2" (R.QPfR). This is because we use **this MORL as the reference to evaluate the Pareto-optimality of RS**. Specifically, we state l.176 that: "as the true Pareto front is unknown in real-world applications, we present empirical support for Hyp. 2 by comparing the front defined by RS (sliding $\lambda$ between $0$ and $1$) to the MORL's solutions optimizing the $\mu$-weighted rewards for $0\leq\mu\leq 1$ (sometimes only $\mu=0.5$ for computational reasons)". We provide more details l.151: "in full generality, improvements in initialization, RL algorithms, data, or specific hyperparameters could enhance performances. [Thus] the true Pareto front [...] needs to be defined with regard to a training procedure. [...] As such, in Section 3 [and Figure 2] we analyze Hyp. 2 by comparing the fronts obtained by RS and scalarized MORL".
Then our experiments consistently show that MORL and RS perform similarly (with minor differences in different setups). Overall, in absence of the true Pareto front, this process provides empirical support **towards** Pareto-optimality of RS, with indeed a limitation highlighted in the paper's name.
---
### Q2. Theoretical analysis (extended in [R.bXWy.Q2](https://openreview.net/forum?id=lSbbC2VyCu&noteId=rSiwrlT8Be))
Indeed, the *full validation* of "Hyp. 1 and 2 [is] based on empirical results" (R.QPfR). That's why we state l.322: "RS relies on an empirical finding: the LMC, which currently lacks full theoretical guarantees, even in the simplest case of moving averages" in supervised learning.
Yet, we respectfully disagree with R.QPfR as **our work already gives theoretical analysis**, in particular in Appendix B.2 where "we provide theoretical guarantees for the near-optimality of RS when considering quadratic rewards" (l.908). Specifically, in Lemma 3, we bound the reward difference between the optimal policy and our interpolated policy. This is referenced in the main paper at two different places, where we state: (i) l.141-143 "we theoretically prove in Appendix B.2 [that our Hypotheses 1 and 2] approximately hold when rewards are replaced by their second-order Taylor expansion with co-diagonalizable Hessians"; and (ii) l.146 "when the weights remain close, we can theoretically justify Hypotheses 1 and 2 (see Appendix B.2) and, more broadly, demonstrate that WI approximates ensembling (see Lemma 4)".
---
### Q3. Computational costs (close duplicate of [R.bJvT.Q3](https://openreview.net/forum?id=lSbbC2VyCu&noteId=Hztsf2jtoo))
RS reduces the computational costs of other MORL strategies; e.g., as stated in Figure 1.b, "with only two trainings [RS] reveals the green front of Pareto-optimal solutions [...] and matches the costly yellow front of MORL requiring [11] trainings on different linear weightings". As a side note, truly revealing the full MORL front would actually require an infinite number of trainings. Thus, we argue that this **efficiency gain is by design**; when considering $N$ rewards, RS only requires $M=N$ fine-tunings, while MORL "requires explicitly maintaining a large set $M\gg N$ networks, practically one for each possible preference" (l.105).
To quantify the efficiency gain of RS, in Figure 2 (from the one-page rebuttal pdf) we define a new measure of success; the expected reward $E_{\hat{\mu}\sim Unif\left(0,1\right)} \hat{R_{\hat{\mu}}}$ where $\hat{R_{\hat{\mu}}} = (1-\hat{\mu})\times R_1 + \hat{\mu} \times R_2$ and the expectation is over all the possible user's linear preferences $\hat{\mu}$ over the $N=2$ rewards. Then we compute the difference between (i) the expected reward for RS (always with $2$ training runs), and (ii) the expected reward for MORL with $M$ training runs. **Plotting this expected reward advantage for different values of $M$ confirms that MORL needs $M \gg 2$ to match RS**. Moreover, because of the dimensional curse, we expect the number of MORL trainings required to match RS to grow exponentially with the number of rewards $N$. In conclusion, these new experiments quantitatively validate that RS is more efficient than MORL, and will be included in the revised paper.
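This expected-reward measure can be estimated by Monte-Carlo sampling over preferences (a hedged sketch with toy reward profiles; for each sampled $\hat{\mu}$ we score the best available policy):

```python
import random

def expected_reward(policies, rng, n_samples=10_000):
    """Monte-Carlo estimate of E_{mu ~ Unif(0,1)} of the best achievable
    (1 - mu) * R1 + mu * R2 among the available (R1, R2) policy profiles."""
    total = 0.0
    for _ in range(n_samples):
        mu = rng.random()
        total += max((1 - mu) * r1 + mu * r2 for r1, r2 in policies)
    return total / n_samples

# Two expert policies, each maximizing one reward: the expectation of
# max(1 - mu, mu) over Unif(0, 1) is 0.75.
estimate = expected_reward([(1.0, 0.0), (0.0, 1.0)], random.Random(0))
```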
---
### Q4. Number of rewards (extended in [R.ntSF.Q2](https://openreview.net/forum?id=lSbbC2VyCu&noteId=UEM7DRMGYu))
For visualization clarity, the Pareto fronts were for $N=2$ rewards, one of the $x$-axis, the other on the $y$-axis. Yet, "RS can scale and trade-off between more rewards" (l.201). We validate this in the spider maps from Figure 2.f (for text generation), from Figure 3.c (for image captioning, adapted in Figure 1.c and 1.d in the rebuttal), and from Figure 5.c (for visual grounding), where we respectively consider $4$, $5$ and $3$ different rewards.
---
### Q5. Formulas clarity
Yes, both formulas refer to $\\{ \lambda_i \\}_{i=1}^N$. The bounds will be made explicit in the revision.
---
### Q6. Selecting the $\lambda$
As detailed l.163, and later l.223, we already **consider two practical strategies to select the values** of the interpolating coefficients $\lambda$:
1. if the user defines a linear preference $\hat{\mu}$, we can select $\lambda=\hat{\mu}$ .
2. if the user provides some labelled validation samples, we can cross-validate $\lambda$.
We validate in Figure 4.a that both strategies perform well. If the user only provides preference comparisons (as suggested by R.QPfR), we could indeed select the $\lambda$ similarly as in reward modeling in RLHF.
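The second strategy could be sketched as a simple grid search over $\lambda$ (a sketch; `evaluate` stands for any hypothetical scoring of the $\lambda$-interpolated policy on the user's labelled validation samples):

```python
def select_lambda(evaluate, grid=None):
    """Cross-validate the interpolating coefficient on a validation score.

    evaluate(lam) is assumed to build the lam-interpolated policy and
    return its score on the user's labelled validation samples.
    """
    if grid is None:
        grid = [i / 10 for i in range(11)]  # lambda in {0.0, 0.1, ..., 1.0}
    return max(grid, key=evaluate)

# Toy validation score peaking at lambda = 0.3.
best = select_lambda(lambda lam: -(lam - 0.3) ** 2)  # -> 0.3
```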
---
We would greatly appreciate it if you took these clarifications above into account during discussions. | Summary: This paper explores a model-soup strategy to efficiently adapt to diverse reward functions from various real-world users. By fine-tuning a pre-trained LLM multiple times each with a specialized reward function and interpolating their weights linearly, the proposed method is able to adapt to various reward functions without having to train a new LLM per user.
Strengths: - The proposed method is much more efficient than the baselines, which have to train a separate model when a new reward function is given.
- The evaluation is thorough and conducted on a variety of LLM tasks.
- The performance of the proposed method does not fall behind compared to the more costly baselines.
Weaknesses: - The novelty of the paper is weak. It seems the main contribution of the paper is applying the weight interpolation (model soup [1]) technique, which was well-explored in supervised learning, to RLHF. I suggest the authors clarify the paper's novelty (other than applying the model-soup technique to another domain) more clearly.
- The authors point out that in RLHF, the reward function is different for each model, unlike supervised learning where the training objective is the same. I agree with the authors on this point, but the rewards used in the paper's experiments do not seem very heterogeneous, which weakens the authors' claim. A more realistic scenario would be to experiment with a set of rewards that contradict each other directly (e.g., different reward functions learned from users with conflicting interests).
- The main strength of the proposed method is that it is more efficient in terms of training (fine-tuning) cost and inference cost. However, the paper does not provide a quantitative comparison of these costs between the proposed method and the baselines. Therefore, it is hard to assess how much efficiency gain the proposed method will provide.
[1] Wortsman et al., Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time, ICML 2022.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Noted above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Noted above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank R.bJvT for reviewing our work, and try to address the expressed concerns below.
---
### Q1. Novelty and difference with model soups (extended in [R.DNE5.Q1](https://openreview.net/forum?id=lSbbC2VyCu&noteId=6LMJAJD6vx))
The first conceptual novelty is arguing for a **multi-objective paradigm** to align deep models with human preferences and reduce reward misspecification. The second empirical novelty is observing new setups/conditions where the **linear mode connectivity** holds, and thus where weight interpolation can be used; e.g., in reinforcement learning with different rewards, or for multimodal tasks. This weight interpolation strategy was indeed used in model soups (MS). Yet, we want to clarify that **RS and MS tackle different problems, have different goals, leading to different methods and implementations**.
- MS challenges the standard model selection after a grid search to improve generalization in supervised learning, and aims at reducing the variance of the predictions by combining the fine-tuned models: thus MS considers different hyperparameters for a fixed training objective across runs, and (usually) uniform interpolating coefficients $\lambda=\frac{1}{M}$.
- In contrast, RS challenges single-policy approaches to improve alignment in reinforcement learning, and aims at reducing reward misspecification by revealing a Pareto front of solutions across the entire space of preferences: thus RS considers different training objectives for fixed hyperparameters across runs, and non-uniform interpolating coefficients $\lambda$ set a posteriori.
Overall, these differences mean that **RS can but MS cannot reduce reward misspecification**. We will clarify this difference between RS and MS in the revised version.
---
### Q2. Rewards diversity
We respectfully disagree with R.bJvT, and argue that **we already use diverse and heterogeneous rewards that are in tension**. For example:
- for the summarization tasks (in Figure 1.b, 2.a and 2.b): $R_1$ rewards completeness, while $R_2$ focuses on "faithfulness" (l.1005).
- for the captioning experiments (in Figure 3.a and 8.b): BLEU1 measures accuracy while ROUGE captures recall (see l.42), and METEOR handles synonyms.
- for the visual grounding experiments (in Figure 5.b and 14), the different rewards consider objects of different sizes.
The dissimilarities between these rewards are quantitatively validated by our experiments; when fine-tuning on one reward, the performances are usually worsened on the others.
For example, for captioning "tuning solely BLEU1 sacrifices some points on ROUGE" (l.213); for visual grounding "optimizing for small objects degrades performance on large ones" (l.259).
These examples are arguably representative of "different reward functions learned from users with conflicting interests" (R.bJvT).
Yet, we acknowledge (in Appendix E.2 and in our response to [R.ntSF.Q1](https://openreview.net/forum?id=lSbbC2VyCu&noteId=UEM7DRMGYu)) some "limitations of weight interpolation when combining antagonist rewards" (l.1059). This was suggested by the results for text-to-image generation in Figure 10, where RS underperforms MORL when considering a *nsfw* reward "very different from aesthetic preferences" (l.1057) and "inversely correlated with image quality" (l.1058). This limitation will be clarified in the limitation section from the revision. However, we want to emphasize that, in this kind of situation with fully antagonist rewards, the **complementarity of MORL and RS is a promising research direction**, as discussed l.1060: "an improved strategy would first learn the MORL [...], and then optimize each reward independently from this improved [MORL] initialization, before applying RS". As another research direction, we suggest (in the legend from Figure 10.a) that: "adding the MORL solutions as intermediate weights may help interpolate between two weights too distant".
---
### Q3. Quantify the efficiency gain (close duplicate of [R.QPfR.Q3](https://openreview.net/forum?id=lSbbC2VyCu&noteId=58jAIy2tXU))
Indeed, "the main strength of the proposed method is that it is more efficient in terms of training (fine-tuning) cost" (R.bJvT) than the MORL baseline. For example, as stated in the legend from Figure 1.b, "with only two trainings [RS] reveals the green front of Pareto-optimal solutions [...] and matches the costly yellow front of MORL requiring [11] trainings on different linear weightings".
As a side note, truly revealing the full MORL front would actually require an infinite number of trainings.
Therefore, we argue that this **efficiency gain is by design**; when considering $N$ rewards, RS only requires $M=N$ fine-tunings, while MORL "requires explicitly maintaining a large set $M \gg N$ networks, practically one for each possible preference" (l.105).
Indeed, as stated l.106, a critical issue in MORL is that “minor [preference] variations may result in significant changes in the solution. Thus, a high level of granularity in the mesh is necessary".
To quantify the efficiency gain of RS, we now provide an analysis in Figure 2 from the one-page rebuttal pdf, where we define a new measure of success: the expected reward $E_{\hat{\mu}\sim Unif\left(0,1\right)} \hat{R}_{\hat{\mu}}$ where $\hat{R}_{\hat{\mu}} = (1-\hat{\mu})\times R_1 + \hat{\mu} \times R_2$ and the expectation is over the user's possible preferences $\hat{\mu}$. Then we compute the difference between (i) the expected reward for RS (always with $2$ training runs), and (ii) the expected reward for MORL with $M$ training runs. **Plotting this expected reward advantage for different values of $M$ confirms that MORL needs $M \gg 2$ to match RS**. Moreover, because of the curse of dimensionality, we expect the number of MORL trainings required to match RS to grow exponentially with the number of rewards $N$. In conclusion, these new experiments quantitatively validate that RS is more efficient than MORL, and they will be included in the revised paper.
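As a minimal, hedged sketch of how this measure can be computed (with a hypothetical concave Pareto front of $(R_1, R_2)$ evaluations standing in for the real trained policies):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_reward(front, n_samples=10_000):
    """Monte-Carlo estimate of E_{mu ~ Unif(0,1)} of the best achievable
    mu-weighted reward (1 - mu) * R1 + mu * R2 over the available solutions."""
    front = np.asarray(front)                                  # shape (K, 2)
    mus = rng.uniform(0.0, 1.0, size=n_samples)
    weighted = (1 - mus)[:, None] * front[:, 0] + mus[:, None] * front[:, 1]
    return weighted.max(axis=1).mean()                         # user picks the best

def points(n):
    # hypothetical concave front of (R1, R2) trade-offs, indexed by lambda
    lams = np.linspace(0.0, 1.0, n)
    return np.stack([np.cos(lams * np.pi / 2), np.sin(lams * np.pi / 2)], axis=1)

rs_front = points(51)   # RS: 2 trainings, then a dense (free) sweep over lambda
morl_front = points(3)  # MORL: one training per linear weighting, M = 3 runs

advantage = expected_reward(rs_front) - expected_reward(morl_front)  # > 0
```

Here `points` is purely illustrative; in the rebuttal the fronts come from actual reward evaluations of the interpolated and MORL policies.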
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
before the discussion period ends, we would love to know if you had the time to read our rebuttal, and whether additional clarification is required. Thank you again for reviewing our work.
Authors. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their time and their insightful feedback. We are encouraged by the positive comments, which highlight the main features of our submission.
- (*topic*) We "address the reward misspecification problem [...] in current RLHF frameworks" (R.QPfR), a problem "that frequently arise in the emerging and important field of aligning generative models with human preferences" (R.TSwH). This "topic [is] of increasing interest and relevance to the community" (R.bXWy).
- (*methodology*) We propose rewarded soup (RS) which "involves individually training multiple networks, each assigned to a different proxy reward, and then linearly combining these networks" (R.ntSF). "The proposed idea is effective yet efficient as it does not require additional training" (R.DNE5), contrary to "the more costly baselines" (R.bJvT) such as MORL.
- (*experiments*) Empirically, we "did a lot of experiments on different task which shows that this interpolating strategy is universal under different application scenarios, while with good performance" (R.stHc). "The paper presents interesting results for many practically relevant and useful benchmarks" (R.bXWy).
- (*theory*) "The approach is well-motivated theoretically" (R.TSwH) and "the theory part connects with experiments very well" (R.TSwH).
We have taken note of the questions and suggested weaknesses, that we directly answer in response to each reviewer.
Most of our answers are based on quotes from the main paper or the Appendix, that the reviewers might have overlooked (in particular the theoretical Appendix B.2 referenced in Section 2.2.2). In contrast, a few questions required new plots to be answered, that we gather in the one-page rebuttal pdf. Specifically:
- Table 1 shows generated samples by our interpolated models for the summarization task. This qualitative inspection is enriched by quantitative evaluations in Figure 1.a and 1.b, with general-purpose quality metrics such as perplexity for summaries and FID for image generations. We validate that the generated samples from our method do not suffer from reduced quality ([R.stHc.Q3](https://openreview.net/forum?id=lSbbC2VyCu&noteId=GlQnTpp2gl)).
- Figure 1.c and 1.d show that performances get better (Pareto-optimally) when increasing the number of averaged weights ([R.ntSF.Q2](https://openreview.net/forum?id=lSbbC2VyCu&noteId=UEM7DRMGYu)) and thus the number of training rewards ([R.QPfR.Q4](https://openreview.net/forum?id=lSbbC2VyCu&noteId=58jAIy2tXU)).
- Figure 2 quantifies the average efficiency gain from RS with regard to the MORL baseline ([R.bJvT.Q3](https://openreview.net/forum?id=lSbbC2VyCu&noteId=Hztsf2jtoo) and [R.QPfR.Q3](https://openreview.net/forum?id=lSbbC2VyCu&noteId=58jAIy2tXU)).
- Figure 3.a and 3.b plot RS's fronts over the course of fine-tuning, and validate that the LMC holds even for longer trainings ([R.TSwH.Q3](https://openreview.net/forum?id=lSbbC2VyCu&noteId=np0m5ACZtx)).
- Figure 3.c illustrates the empirical difference and the complementarity of rewarded soups and model soups ([R.DNE5.Q1](https://openreview.net/forum?id=lSbbC2VyCu&noteId=6LMJAJD6vx)).
We hope our responses clarify the expressed concerns. If there is anything else we can do to further improve our work, please let us know.
Pdf: /pdf/370ad70dcb88c5e36e8e3eb8abfefcd4291553fe.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors present a new strategy to address the heterogeneity of diverse rewards in reinforcement learning. Specifically, they propose 'rewarded soup,' which involves individually training multiple networks, each assigned to a different proxy reward, and then linearly combining these networks. Compared to the multi-objective reinforcement learning baselines, the proposed rewarded soup demonstrates its superiority on several benchmarks, including text-to-text, text-to-image, and control benchmarks.
Strengths: - The authors presented a comprehensive study on the topic and the research field, as most arguments in introduction are supported by some references. The motivation is strongly supported and the path to the proposed method is reasonable.
- Presentation is clear, concise, and easy-to-understand, and the idea is simple yet effective.
- The authors conducted extensive experiments to verify the effectiveness of the proposed rewarded soup, and this includes multiple text-to-text tasks (shown in Section 3.1), image-to-text tasks (Section 3.2), text-to-image tasks (Section 3.3), and control tasks (Section 3.5). For most of the experiments, the improvement against MORL is obvious.
Weaknesses: Although there are many strengths in the paper, there is a weakness that, if addressed, could further enhance the overall quality.
- Ablation studies could be added: While the authors presented many reinforcement learning applications and also showed the improvement, it would be better to include more fine-grained ablation studies, such as how the difference of rewards affects the effectiveness of MORL baseline and the rewarded soup or how the number of networks affects the results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - This is aligned with the weakness part. I am curious about the effectiveness of the rewarded soup under different scenarios:
1. How does the difference/gap of rewards affect the effectiveness of the MORL baseline and the rewarded soup?
2. How does the number of networks affect the results?
The reason why I am particularly interested in these two questions is that a prior work [1] indicates when the models are quite different (based on their objectives), the linear combination is likely to produce less favorable results. I am wondering about "the limit" of the rewarded soup and in what situations the proposed method might fail.
[1] "Robust fine-tuning of zero-shot models" Wortsman et al., CVPR 2022
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have provided sufficient information of limitations and societal impact in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank R.ntSF for the deep understanding of the paper, for highlighting its strengths and for asking two intriguing questions - that we try to answer below.
---
### Q1. How does the difference/gap of rewards affect the effectiveness of the MORL baseline and the rewarded soup?
Our experiments in captioning and image generation provide empirical evidence that **the more similar the rewards, the higher the gains of RS versus MORL**.
First, in the **captioning** experiment in Figure 3.c, by analyzing the transfer abilities across the 4 main metrics (BLEU1, BLEU4, ROUGE, and METEOR), we can deduce that:
- BLEU4 and ROUGE are very similar.
- BLEU1 and BLEU4 are more similar than BLEU1 and ROUGE.
- METEOR is an outlier, quite different from other metrics, in particular from BLEU1.
Having established these similarities across rewards, we observe that the gains of RS versus MORL are consistent with them.
Specifically,
- with $R_1=\text{BLEU4}$ and $R_2=\text{ROUGE}$, we observe large performance gains for RS versus MORL (in Figure 8.a), where the green front is highly convex far above the solution provided by the MORL objective.
- with $R_1=\text{BLEU1}$, we observe larger gains (and cleaner convexity) for RS versus MORL with $R_2=\text{BLEU4}$ (in Figure 3.b) than with $R_2=\text{ROUGE}$ (in Figure 3.a).
- with $R_1=\text{BLEU1}$ and $R_2=\text{METEOR}$, we observe better performances for MORL than for RS (in Figure 8.b).
Overall, all captioning rewards remain sufficiently similar to favor RS over MORL when combining all rewards in Figure 3.c.
Similarly, in the **image generation** experiment, when we consider two (arguably similar) aesthetic rewards in Figure 5.a to fine-tune a diffusion model, RS's front is to the right and above MORL's front.
Then, when the rewards are very different or antagonist, we totally agree with your statement that "the linear combination is likely to produce less favorable results": in Appendix R.2 we include a *nsfw* reward "inversely correlated with image quality" (l.1058), and observe in this case that "MORL has higher scores than RS" (l.1056). This result "shows some limitations of weight interpolation when combining antagonist rewards" (l.1059). These insights were already briefly mentioned in the main paper (we state l.140 that "we report a few limitations in Appendix and research directions to fix them"), but will be clarified in the limitation section from the revised version. They can be explained in two different ways:
- intuitively, from a **loss landscape perspective**, weights fine-tuned on diverse rewards will be more distant, thus potentially breaking the linear mode connectivity.
- theoretically, thanks to **Lemma 3 in Appendix B.2.2**, where we bound the difference between the optimal reward and RS's reward by a RHS term growing with "the maximum of eigenvalues ratio" (l.942) across rewards' Hessians. This RHS term is illustrated in Figure 7. Then, if the rewards are more diverse, their Hessians would have more dissimilar eigenvalues; the maximum eigenvalue ratio would grow, the RHS term in Lemma 3 would grow, and our guarantees for the optimality of RS would loosen.
---
### Q2. How does the number of networks affect the results?
Though most of our experiments are with $N=2$ rewards and networks for visualization clarity, "RS can scale and trade-off between more rewards" (l.201).
We validate this empirically in the spider maps from Figure 3.c (for image captioning), from Figure 2.f (for text generation), and from Figure 5.c (for visual grounding), where we uniformly combine respectively $M=5$, $M=4$, and $M=3$ networks fine-tuned on $N=M$ rewards, one reward each.
Most importantly, we **refine this analysis** for the captioning task through additional spider maps in Figure 1.c and 1.d (from the one-page rebuttal). Specifically, we compare the performances across all $N=5$ metrics when averaging $1\leq M \leq 5$ networks (each fine-tuned on one of the $N$ rewards, thus leaving out $N-M$ rewards at training) and sequentially adding more networks to the weight average. We consistently observe that **adding one additional network specialized on one additional reward extends the scope of the possible rewards that RS can tackle Pareto-optimally**.
As a short note, another possibility to scale the number of networks is at a fixed number of rewards, by learning multiple networks on the same reward as in model soups (MS) [Wortsman2022].
We consider this in Figure 3.c from the one-page rebuttal for the captioning task and plot $\lambda \to \frac{1-\lambda}{2} \cdot (\theta_{BLEU1}^{v1} + \theta_{BLEU1}^{v2}) + \frac{\lambda}{2} \cdot (\theta_{ROUGE}^{v1} + \theta_{ROUGE}^{v2})$, where $\theta_{BLEU1}^{v1}$ and $\theta_{BLEU1}^{v2}$ are from two independent RL fine-tunings on BLEU1 (and similarly for ROUGE). This shows that we can combine RS and MS, i.e., $\lambda$-interpolating between two MS weights specialized on different rewards, which are themselves the averages of models fine-tuned on the same reward.
In conclusion, in all our experiments, **performances consistently increase for more networks**; when they are trained on different rewards, this reduces reward misspecification; when they are fine-tuned on the same reward, this reduces variance. This is consistent with the findings from previous works, e.g., Figure B.1 from model soups [Wortsman2022], which showed that increasing the number of averaged models consistently helps.
[Wortsman2022] Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. ICML.
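The RS+MS combination above could be sketched as follows (toy single-tensor checkpoints, purely illustrative; $v1$/$v2$ denote the two independent fine-tunings per reward):

```python
import numpy as np

def rs_of_ms(theta_b1, theta_b2, theta_r1, theta_r2, lam):
    """lambda-interpolate between two model-soup averages, each specialized on
    one reward: (1-lam)/2 * (b1 + b2) + lam/2 * (r1 + r2), per parameter."""
    return {k: (1 - lam) / 2 * (theta_b1[k] + theta_b2[k])
               + lam / 2 * (theta_r1[k] + theta_r2[k])
            for k in theta_b1}

# hypothetical checkpoints: two independent RL fine-tunings per reward
b1, b2 = {"w": np.array([1.0, 0.0])}, {"w": np.array([0.8, 0.0])}  # on BLEU1
r1, r2 = {"w": np.array([0.0, 1.0])}, {"w": np.array([0.0, 0.8])}  # on ROUGE

mid = rs_of_ms(b1, b2, r1, r2, lam=0.5)  # mid["w"] == [0.45, 0.45]
```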
---
We hope these explanations and these additional ablations clarify your questions. Please let us know if there is anything else we can do to further strengthen our submission.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing the response; the answers are convincing and informative. I think these results support the arguments and properly answer my questions. I will keep my score as weak accept.
Strengths: - The manuscript is well-written and organized overall.
- The proposed idea is effective yet efficient as it does not require additional training.
- Extensive experimental results demonstrate the effectiveness of the proposed method on various domains; text generation, image captioning, and diffusion model.
Weaknesses: Interpolating weights for better performance is not a new concept; model soups, mentioned in the manuscript, already does this. However, the authors did not provide any comparison with it. When we have N fine-tuned models, can rewarded soup perform better than model soup?
For example, in the case of an image captioning task, the experimental setup assumes only two rewarded models, differently fine-tuned on the AVA and cafe datasets, respectively. I think there is no reason to hesitate to apply model soups here as well.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I already mentioned above in the weakness section. I would like to ask the authors how the proposed method is significantly better than other weight interpolation methods like model soup, not only than the reward interpolation method, MORL.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I think there is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank R.DNE5 for highlighting the organization of the paper and the diversity of our experiments.
---
### Q1. Similarity and differences with model soups
R.DNE5's main concern relates to the similarity between rewarded soups (RS) and model soups (MS).
First, we totally acknowledge similarity with MS; actually, as stated l.65, "the name rewarded soups follows the terminology of model soups".
Indeed, RS and MS both average the weights of models fine-tuned from a shared pre-trained initialization.
Yet, we want to clarify that **RS and MS tackle different problems, have different goals, leading to different methods and implementations**.
- MS challenges the standard model selection after a grid search to improve generalization in supervised learning, and aims at reducing the variance of the predictions by combining the fine-tuned models: thus MS considers different hyperparameters for a fixed training objective across runs, and (usually) uniform interpolating coefficients $\lambda=\frac{1}{M}$.
- In contrast, RS challenges single-policy approaches to improve alignment in reinforcement learning, and aims at reducing reward misspecification by revealing a Pareto front of solutions across the entire space of preferences: thus RS considers different training objectives for fixed hyperparameters across runs, and non-uniform interpolating coefficients $\lambda$ set a posteriori.
Overall, these differences mean that **RS can but MS cannot be applied to reduce reward misspecification**.
We refer R.DNE5 to the Figure 10.b from Appendix D.2 (reproduced and enriched in the Figure 3.c from the one page rebuttal). The experiments are for the captioning task, when considering BLEU1 and ROUGE as rewards. The green lines only consider one fine-tuning per reward (standard RS), while the light-blue (for BLEU1) and pink (for ROUGE) lines consider two fine-tunings on one single reward (standard MS). As stated in the legend from Figure 10.b , "it presents the fronts described when we interpolate weights fine-tuned on a shared reward, as in model soups. This also only reveals a small portion of the spectrum of preferences, validating the need of diverse rewards to satisfy all users’ preferences". Specifically, MS mostly reduces variance; in contrast, considering **weights specialized on different rewards** (as proposed in this work) is key to reveal the front across the entire space of preferences.
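To make the contrast concrete, here is a minimal sketch of the shared interpolation operation over toy parameter dictionaries (hypothetical single-tensor checkpoints; only the choice of training rewards and of the coefficients $\lambda$ differs between MS and RS):

```python
import numpy as np

def soup(state_dicts, lambdas):
    """Weight interpolation theta = sum_i lambda_i * theta_i over checkpoints
    sharing the same pre-trained initialization and architecture."""
    assert abs(sum(lambdas) - 1.0) < 1e-8
    return {k: sum(lam * sd[k] for lam, sd in zip(lambdas, state_dicts))
            for k in state_dicts[0]}

# hypothetical checkpoints
theta_r1_a = {"w": np.array([1.0, 0.0])}  # fine-tuned on reward R1 (run a)
theta_r1_b = {"w": np.array([0.8, 0.2])}  # fine-tuned on reward R1 (run b)
theta_r2   = {"w": np.array([0.0, 1.0])}  # fine-tuned on reward R2

# MS: same training objective across runs, uniform coefficients 1/M
ms = soup([theta_r1_a, theta_r1_b], [0.5, 0.5])  # ms["w"] == [0.9, 0.1]
# RS: different training rewards, non-uniform lambda set a posteriori
rs = soup([theta_r1_a, theta_r2], [0.3, 0.7])    # rs["w"] == [0.3, 0.7]
```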
In summary, regarding the exact statements from R.DNE5:
- "Interpolating weights for better performance is not a new concept": indeed, but we are the first to use weight interpolation for alignment, for models RL fine-tuned with different rewards, in particular for generative and multimodal tasks.
- "the authors did not provide any comparison with model soups": actually we already did, in Figure 10.b from Appendix D.2.
- "When we have $N$ fine-tuned models, rewarded soup can perform better than model soup?": RS will be better in terms of Pareto optimality. Yet, if the true reward is available before training and thus there is no reward misspecification, fine-tuning $N$ models on this exact reward (as in MS) will certainly provide better results.
- "For example, in the case of an image captioning task, the experimental setup assumes only two rewarded models, differently fined-tuned models on AVA and cafe datasets". In the image captioning task (Section 3.2), we consider multiple metrics such as BLEU1, BLEU4, ROUGE, and METEOR. In the image generation task (Section 3.3), we consider two models fine-tuned on reward models trained on AVA and cafe datasets. In the latter case, fine-tuning multiple times on the cafe reward would fail to improve the AVA reward, as "the model $\theta_{\text{cafe}}$ performs poorly in terms of AVA" (l.245).
- "how the proposed method is significantly better than other weight interpolation methods like model soup": RS is the only weight-interpolation method seeking Pareto optimality across diverse rewards, thus the other methods will only optimize a metric given a priori, without tackling reward misspecification.
As a final note, in the Figure 3.c from the one page rebuttal, we also try to combine RS and MS, and thus plot:
$\lambda \to \frac{1-\lambda}{2} \cdot (\theta_{BLEU1}^{v1} + \theta_{BLEU1}^{v2}) + \frac{\lambda}{2} \cdot (\theta_{ROUGE}^{v1} + \theta_{ROUGE}^{v2}),$ where $\theta_{BLEU1}^{v1}$ and $\theta_{BLEU1}^{v2}$ are from two independent RL fine-tunings on BLEU1 (and similarly for ROUGE). This orange line $\lambda$-interpolates between two MS weights specialized on different rewards, which are themselves the averages of models fine-tuned on the same reward. The convexity of the interpolation and the slightly better performances at the endpoints show that **we can combine the benefits from RS (reward misspecification) and MS (variance reduction)**.
---
We hope this answer clarifies the difference between rewarded soups and model soups; we remain available for any further discussion.
---
Rebuttal Comment 1.1:
Comment: I appreciate your response. I have read the other reviews that have been posted and their corresponding author responses. While some of my concerns have been addressed, it remains uncertain whether all the concerns of the other reviewers have been resolved. As of now, I maintain my current score for this paper, but I will continue to read other reviews and the authors' responses until the end of the discussion period.
---
Reply to Comment 1.1.1:
Comment: We thank R.DNE5 for taking the time to consider and acknowledge our rebuttal. Should there be any remaining concerns that you believe warrant further attention, we would be more than happy to provide additional clarification. | Summary: This paper proposes a method of using linearly interpolated weights fine-tuned on different rewards, instead of using a linear combination of rewards to fine-tune the weights, as a solution for applying a model under different and multiple preference scenarios. The idea is intuitive but works well: it (the RS) can achieve similar performance compared with MORL, while RS reduces the computational cost significantly. The author makes some mathematical hypotheses which empirically hold when all the weights to be interpolated are fine-tuned from a pretrained model. Also, the author has done a lot of experiments to show the feasibility of the proposed method.
Strengths: The paper is clearly written. The experimental work is sufficiently done.
The author did a lot of experiments on different task which shows that this interpolating strategy is universal under different application scenarios, while with good performance.
The proposed method reduces the heavy computational requirements for pretuning compared with previous work, which makes it much more applicable and flexible to complex application scenarios.
Weaknesses: Novelty: The heaviest workload in this paper is applying the strategy to different tasks and testing their performance, while fewer novel concepts or theories are presented.
Condition for Hypothesis: The hypotheses used in the method, such as the linear mode connectivity, which states that the combined reward is concave with respect to the interpolated weights, require research on the model structure and activation function. There should be some limitations on the design of networks that ensure the hypotheses hold.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Does the method harm the absolute quality of the produced samples? Evaluation other than reward functions should be provided.
Maybe you can show some extrapolated examples generated through this method.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: All right in total.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank R.stHc for the positive feedback on the clarity of our idea and the experiments. We would like to respond to R.stHc's review as follows.
---
### Q1. Novelty
Our approach is novel from two perspectives.
The first **conceptual** novelty is arguing for Pareto-optimality and a **multi-objective paradigm** to reduce **reward misspecification** when aligning deep generative models. This first novelty is critical to "handle the diversity of human preferences" (l.50), and as further detailed in Appendix A.1, to "support decision-making" (l.872), "interpretability and explainability" (l.878).
The second **empirical** novelty is proposing rewarded soups, based on new setups/conditions where the **linear mode connectivity** holds (in reinforcement learning, with diverse rewards, even in the multimodal case) and thus where weight interpolation can be used. This second novelty is critical to reduce "the computational, memory, and engineering costs involved" (l.106) in traditional MORL approaches, and as further detailed in Appendix A.2, to be "compatible with the inherent iterative engineering process of alignment" (l.890).
Moreover, we want to point out that in Appendix B.2 "we provide **theoretical** guarantees for the **near-optimality of RS** when considering quadratic rewards" (l.908), as referenced l.141-143 and l.146 in Remark 1. Specifically, in Lemma 3, we bound the reward difference between the optimal policy and our interpolated policy. We give more theoretical details in our response to [R.bXWy.Q2](https://openreview.net/forum?id=lSbbC2VyCu&noteId=rSiwrlT8Be).
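To make the core operation of rewarded soups concrete, here is a minimal sketch of interpolating two checkpoints fine-tuned from a shared pre-trained initialization. This is an illustration, not the authors' exact implementation: the function name `rewarded_soup` and the plain-dict "checkpoints" (floats standing in for tensors) are hypothetical.

```python
def rewarded_soup(theta_a, theta_b, lam):
    """Linearly interpolate two checkpoints fine-tuned on different rewards.

    theta_a, theta_b: dicts mapping parameter names to values (floats here;
    tensors in practice), both fine-tuned from the same pre-trained init.
    lam: interpolation coefficient in [0, 1].
    """
    assert theta_a.keys() == theta_b.keys()
    return {k: (1 - lam) * theta_a[k] + lam * theta_b[k] for k in theta_a}

# Sweeping lam over [0, 1] traces an approximation of the Pareto front
# between the two rewards, with no additional training.
soup = rewarded_soup({"w": 0.0, "b": 1.0}, {"w": 2.0, "b": 3.0}, 0.5)
```

Sweeping a single scalar at inference time is what makes the approach cheap compared to retraining one model per preference weighting.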
---
### Q2. Limitations for the design of networks for LMC?
In our experiments, we consider different network architectures (transformers, CNNs, and MLPs), with various activation functions.
We also investigate different training procedures: with low-rank adapters, partial or end-to-end fine-tunings.
We do so for many different tasks and modalities: text generation, image captioning, image-to-text generation, visual grounding, visual question answering, etc.
Our empirical observation is that, across those setups, the **LMC is architecture-agnostic, procedure-agnostic, task-agnostic and modality-agnostic**.
The main condition we require is the shared pre-trained initialization [Neyshabur2020], as emphasized in Remark 1; this "prevents the weights from diverging" (l.145) and forces them "to remain close" (l.146).
The other condition, suggested by the literature [Li2022,Ilharco2023] and as also discussed in [R.TSwH.Q4](https://openreview.net/forum?id=lSbbC2VyCu&noteId=np0m5ACZtx), is that the architecture has enough trainable parameters. Indeed, larger networks may facilitate the orthogonality of the fine-tuned updates; then [Ilharco2023] "speculate that this [orthogonality] enables the combination of task vectors via addition with minimal interference". In conclusion, our experiments and the literature suggest **that the network design is not critical for the LMC, as long as the network is pre-trained and sufficiently parameterized**. Those constraints are arguably minimal given the predominance of the foundation model paradigm and the scaling trend in deep learning.
[Neyshabur2020] What is being transferred in transfer learning? NeurIPS.\
[Li2022] Branch-Train-Merge: Embarrassingly parallel training of expert language models.\
[Ilharco2023] Editing models with task arithmetic. ICLR.
---
### Q3. Does the method harm the absolute quality of the produced samples? show the generated samples, and provide more evaluation
**Qualitatively**, samples generated by weight interpolated models do not suffer from reduced quality. This was visible for text-to-image generation with diffusion models in Figure 12 from Appendix E.3, where we state: "we can see that all interpolated models produce images of similar quality compared to fine-tuned models". Moreover, **our anonymous website** (referenced l.856, l.1065, and l.1084), also includes generated samples for the locomotion task and for the text-to-text summarization task. For the sake of completeness, we now include **examples of generated summaries** in the Table 1 from the one-page rebuttal pdf; qualitatively, the summaries generated by interpolated models remain grammatically coherent.
To **quantitatively** validate this insight, the one-page rebuttal pdf includes new plots evaluating the samples generated by RS.
- Figure 1.a evaluates the generated summaries when $\lambda$-interpolating between two LLMs fine-tuned on two summary rewards. We leverage two text metrics; the first is (i) **perplexity** (exponentiated average NLL of the generated summaries) according to MLMS [Salazar2020] and GPT2 (following [Lee2021] and this [blog](https://huggingface.co/docs/transformers/perplexity)); the second is (ii) **quality**, as estimated by this [newspaper quality model](https://huggingface.co/valurank/distilbert-quality).
- Figure 1.b evaluates the generated images when $\lambda$-interpolating between two diffusion models fine-tuned on two aesthetic rewards. We leverage two standard image metrics; the first is (i) **FID** [Heusel2018] measuring image realism; the second is (ii) **CLIPScore** [Hessel2021] measuring image-text alignment.
In conclusion, we confirm quantitatively that **RS does not deteriorate quality**. More precisely, by interpolating the weights, we also interpolate the metrics; intermediate values of $\lambda$ even sometimes increase quality. We will detail this analysis in the revised paper, and would be pleased to include any other suggested quality metrics.
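For reference, the perplexity metric described above (the exponentiated average negative log-likelihood of the generated tokens) can be sketched as follows. This is a generic illustration, not the exact scoring code of [Salazar2020] or the linked blog; the function name and inputs are hypothetical.

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp of the mean negative log-likelihood per token.

    token_nlls: per-token negative log-likelihoods (natural log) assigned
    by a scoring model (e.g., GPT-2) to a generated summary.
    """
    return math.exp(sum(token_nlls) / len(token_nlls))

# A model assigning probability 1/2 to every token has perplexity 2.
ppl = perplexity([math.log(2)] * 4)
```

Lower perplexity indicates that the scoring model finds the generated text more predictable, which is commonly used as a proxy for fluency.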
[Salazar2020] Masked Language Model Scoring. ACL.\
[Lee2021] Towards Few-Shot Fact-Checking via Perplexity. ACL.\
[Heusel2018] GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. NeurIPS.\
[Hessel2021] CLIPScore: A Reference-free Evaluation Metric for Image Captioning. ACL. | null | null |
Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback | Reject | Summary: The paper studies the ability of LLMs to improve in a negotiation game. They find that only a subset of language models can self-improve from AI feedback, that a model's ability to learn from feedback depends on its role in the game, and that stronger agents can go through more rounds of negotiation.
Strengths: - The paper is generally well written; it was more or less easy to understand the entire paper.
- The experiment methodology is sensible. It was nice to narrow down the set of models by eliminating models based on their ability to respond to feedback or understand the problem.
- The results are interesting and could be valuable to the community on understanding the role of LLMs as agents.
Weaknesses: - It seems like incorporating feedback is done by providing the feedback as context for the LLM. The paper could be made stronger by utilizing the fine-tuning APIs for the given models.
- Using GPT3.5 as the fixed agent is interesting. It's clear GPT-4 is the strongest agent in this scenario and it'd be interesting to see how well these results would hold against a stronger agent. In a similar vein, seeing how well these results would hold when actually negotiating with a human.
- Could buyers and sellers be prompted better? E.g., would it be possible to prompt claude instant v1.0 more effectively to respond to multi-turn negotiations, or to prompt Cohere command or AI21 j2 better to respond to bargaining and feedback, respectively?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall, I think the work is interesting, it's reasonably well designed, and it has interesting implications for reinforcement learning and large language models. I think there were some decisions (e.g., incorporating feedback in the context passed to the LLM, using GPT3.5 as the fixed agent that we are buying from/selling to) that make the paper's results a little less clear. However, the results are interesting and show a first step towards using LLMs as agents in negotiations.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors partially address limitations of their work, and address societal implications of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. Below we address the following suggestions:
- “Utilizing the fine-tuning APIs for the given models”: we would very much love to do so, yet we do not have access to finetune GPT-3.5 / GPT-4 / Claude. Note that currently the general research and open-source community do not have access to finetune these models, and the current OpenAI finetune API is on GPT-3 (which is significantly weaker and does not understand the basics of the game), not GPT-3.5 / 4.
- “Seeing how well these results would hold when actually negotiating with a human”: we would very much love to do so, yet to obtain a statistically significant distribution of deal prices the negotiation needs to be replayed 500 times — in our experiments we did more than 10k negotiation runs. This scale is too large and expensive for us to run human experiments (even just 500 runs). Yet we also view this as a demonstration of the scalability of AI feedback (compared to human feedback), which is an advantage of our approach.
- “Could buyer's and seller's be prompted better?”: when we started the initial experiments, we found that Claude and GPT could already follow the instructions well enough, so we did not further optimize the prompts for them. For Cohere and AI21, since their models do not quite understand the rules of the game, we spent a non-trivial amount of effort on prompt engineering for them, yet no matter how we tried, we could not make them understand, so we just report their failure behavior in our Figure 3.
- “Incorporating feedback in the context passed to the LLM”: There are many existing / concurrent works showing that feedback in the form of natural language is effective and promising; see Scheurer et al. 2022 and Madaan et al. 2023. Intuitively, as long as the model has a basic level of instruction following, it can understand and follow natural language feedback. Our results align with existing works on language feedback, and we believe that AI feedback in the form of natural language is indeed a promising direction to explore further.
- “Using GPT3.5 as the fixed agent that we are buying/selling against”: We made this decision given the limited budget constraint. Currently, GPT-3.5 is more or less the default baseline when comparing models, as is the practice in Zheng et al. 2023 and Wang et al. 2023, so we follow this major practice. We would love to explore comparisons among more models should the budget permit.
**References**
Bai et al. 2022. Constitutional AI: Harmlessness from AI Feedback
Scheurer et al. 2022. Training Language Models with Language Feedback
Madaan et al. 2023. Self-Refine: Iterative Refinement with Self-Feedback
Zheng et al. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena
Wang et al. 2023. How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources | Summary: This paper investigates the intriguing possibility of autonomous improvement among multiple large language models through a negotiation game. By assigning various LLMs to distinct roles and allowing them to engage in iterative improvement, the paper aims to enhance their negotiation strategies without human intervention. The study uncovers some insights into negotiation problems, encompassing the assessment of model capabilities and their responses to AI feedback. However, after careful evaluation, I am inclined to reject this paper due to its limited contribution and insufficient experimental results, which fail to adequately demonstrate the effectiveness of its findings.
Strengths: This paper studies an interesting problem: Improving large language models with each other and demanding only black-box access is a promising direction.
This paper is largely clear and concise. It is easy to follow the problem setting and the negotiation process.
This paper compares capabilities (especially the continue learning ability using in-context learning) of some advanced large language models in the proposed negotiation problem, which is interesting.
Weaknesses: One of the main technical novelties is the AI feedback technique used in their method. However, this technique seems like a result of random attempts plus some intuition. More detailed thought on how the authors developed this technique, or a comparison with other possible candidate techniques, is needed. Moreover, this technique is similar to CoT. It would be better for the authors to discuss the relationship between these two techniques. I am curious about the performance of guiding LLMs to think step by step without relying on additional critics in negotiation problems. More thorough explanations or experimental results on this aspect would provide deeper insights into the effectiveness of the proposed approach.
This paper aims to improve the ability of large language models by self-play with each other. However, to strengthen the paper's claims, it would be better to provide additional evidence regarding the transferability of the proposed framework to different types of games. Suggestions on the specific implementation process would also be valuable for applying this framework to different domains.
The stability of the environment is also a concern since the results appear to heavily rely on the reliability of the moderator. This dependence raises doubts regarding the individual contributions of various components in the system. Furthermore, the claim of proposing a technique for prompt optimization for generic classification tasks in Line 120 lacks sufficient details and evidence. Elaborating on this technique and demonstrating its effectiveness would enhance the paper's credibility and address this specific weakness.
It is not clear whether there exists an upper limit of improvement using ICL. It would be better for the authors to also discuss ICL with, e.g., fine-tuning or other trainable parts.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: What is the exact definition of "self-play"? It seems that only one of the two players improves in the setting of this paper, which is slightly different from the previous definition in, e.g., https://arxiv.org/pdf/2002.04017.pdf.
Line 59: It would be beneficial to provide additional details, possibly with references, comparing the effectiveness of using AI feedback with RLHF. This comparison would contribute to a more comprehensive understanding of the proposed approach's advantages and distinguish it from existing techniques.
Line 85: Why is the LM engine behind the critic consistently the same as that of the player it provides feedback to?
Figure 4: It lacks the experimental results of Claude-v1.3 and GPT-4 as buyers. Why are these scenarios omitted? Including these results in the figure would ensure a complete representation of the experiments conducted.
Presentation:
Line 118, 119: inspect -> inspecting, add -> adding
Line 120: recommend -> and recommending
Line 211: comparison -> comparisons
Figure 5: whild -> while
Line 257: use less -> uses fewer
Line 258: serve -> serves
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 1 poor
Limitations: The authors discuss the limitations of the work in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. Below are our responses:
## Significance of AI feedback and our contribution
Our systematic investigation aims to answer the research question of whether multiple Large Language Models (LLMs) can improve each other in a negotiation game with minimal human intervention, and the negotiation game framework and the methods used in the game are designed to achieve this purpose.
At first glance, the settings of this work may look superficially easy (for real humans), but they are indeed nontrivial for models. This is why much of our effort is spent on designing the basic configurations of our game playing. Our method systematically converges to better results with quantitative evidence and qualitative examples, not just random attempts (as demonstrated in our Figures 4, 5 and 6). The core contribution of this work is showing that LLMs, when strong and aligned enough, can continuously improve through self-play and AI feedback, as summarized in our Figure 1C and detailed in our Figure 6. Qualitative examples of how the model improves over multiple rounds of game playing and AI feedback are shown in our Figure 7.
## Comparisons between in-context learning from AI feedback to other related approaches
**In-context learning v.s. finetuning v.s. RL**
- The reviewer asks us to “discuss ICL with, e.g., fine-tuning or other trainable parts” and also to compare “the effectiveness of using AI feedback with RLHF”. Although we would very much love to experiment with SFT and RL, we do not have any finetuning / RL access to GPT and Claude.
- In general, the current conclusion (from the collective experience of the community) about the three learning paradigms is that ICL is cheaper than supervised finetuning (SFT), which is in turn cheaper than RL; this is why the current major paradigm of interacting with LLMs is ICL.
- Currently, only very few big companies have the resources to do large-scale finetuning and RLHF, and their results show that all three paradigms improve the model. The current major practice is to always start from ICL to study what prompt data is effective, then apply SFT, then RL (see Touvron et al. 2023). Although ICL may not bring the same improvements as SFT and RL, the improvements from ICL can still be nontrivial given the correct prompt (as shown in our Figure 7), and the data used in ICL is usually also effective when used for SFT (e.g., Fu et al. 2023).
**AI feedback v.s. chain-of-thought**
- We find it very hard to understand why the reviewer believes AI feedback is similar to chain-of-thought. To the best of our knowledge, and we believe this reflects the majority opinion of the research community, these are parallel ideas, and most existing work focuses on only one of them, such as AI-feedback-only in Bai et al. 2022 and Scheurer et al. 2022, or chain-of-thought-only in Wei et al. 2022. There is very recent work that tries to combine the two techniques, like Madaan et al. 2023, but they treat the two techniques as complementary, rather than one as an alternative to the other.
- In our case, we could also combine the two techniques, e.g., having the player think about what to say before actually saying it, then having the critic provide feedback on the player's internal thinking process. We believe this is an interesting direction to explore and will try it in follow-up work.
## Other important concerns
**Implementation and transferability of our experiments**
- We will open source all our prompts and codes to support research in the direction of game playing and AI feedback.
- Further, regarding the applicability and effectiveness of AI feedback in other domains, we note that there are concurrent works showing that in-context learning from AI feedback is also effective for reasoning (Madaan et al. 2023) and factuality (Du et al. 2023). Combining our results with these two concurrent works, we believe that AI feedback is indeed an effective and promising direction for improving LLMs.
**Claude v1.3 and GPT-4 as buyers**
- The performance of Claude-v1.3 and GPT-4 as buyers is shown in Figure 6, B1, where the GPT-4 buyer consistently improves over multiple rounds, and the Claude-v1.3 buyer improves until the third round. The price distributions of these two engines are similar to the distributions in Figures 4 and 5, so we omit them to save space.
**Why using the same engine for the critic and the player**
- Since the overall study of game playing and AI feedback in the research community is still at a relatively early stage, an important objective of this work is to set up the basic experimental settings before pushing toward more complex ones, which requires us to focus on the major factor we want to study (which is AI feedback) while keeping other factors minimal and constant. This is why we use the same engine for the critic and the player, to keep the setting simple. We will try more settings if the budget permits.
**References**
Fu et al. 2023. Specializing Smaller Language Models towards Multi-Step Reasoning
Yuan et al. 2023. Scaling Relationship on Learning Mathematical Reasoning with Large Language Models
Touvron et al. 2023. Llama 2: Open Foundation and Fine-Tuned Chat Models
Bai et al. 2022. Constitutional AI: Harmlessness from AI Feedback
Scheurer et al. 2022. Training Language Models with Language Feedback
Wei et al. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Madaan et al. 2023. Self-Refine: Iterative Refinement with Self-Feedback
Du et al. 2023. Improving Factuality and Reasoning in Language Models through Multiagent Debate
Gao et al. 2022. Scaling Laws for Reward Model Overoptimization
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thanks to the authors for the detailed response. I appreciate the additional clarifications from the authors. | Summary: This paper studies whether multiple large language models (LLMs) can improve each other in a negotiation game by playing, reflecting, and criticizing, with minimal human intervention. Two LLMs play the roles of a seller and a buyer, and a third LLM plays the role of a critic who provides feedback to one of the players to improve their negotiation strategy. The authors report several intriguing findings, such as the different abilities and behaviors of various LLMs in the game, the trade-off between deal price and success rate, and the evidence of improved language complexity and strategy from iterative AI feedback.
Strengths: 1. Interaction between LLMs is quite interesting and potentially important for future AI research.
2. The authors experimented with a variety of LLMs, including open-source (Cohere) and proprietary (GPT, Claude) ones.
Weaknesses: Major Issue:
1. The negotiation setting is quite contrived and not well-grounded. Specifically, there is no context about the goods being discussed. Also, the feedback and overall conversation seem very generic (Figure 2). There is no value/intrinsic motivation for the buyer to get the goods, and there is only one choice to go for. Typically, negotiation in real life doesn't happen this way.
2. It feels like the current LLM negotiation setting can be viewed as 'predicting the most probable outcome/text' given the negotiation contexts.
3. AI feedback is not the same as human feedback; the behavior may be completely different in real-world situations, since there are many other concerns in negotiation, including 'value', 'time', and 'personal preference'. This makes me wonder what this simple negotiation has to offer in terms of understanding/future research.
Minor Issue:
1. The title is a bit misleading. It seems to me from the title that the paper is about an algorithm, but it is more about evaluations and understanding.
[After Rebuttal]
My concern still lies in the 'motivation' behind the setting for the experiment. In my understanding, negotiation requires a setting where, e.g., both seller and buyer have a stake in the product (the buyer wants to pay less than the *utility* of the product and the seller wants to earn more than the *cost*). However, such a basic setting is not present in the experiments.
This makes me wonder whether the observed effect is really negotiation or just the effect of instruction fine-tuning.
I decided not to change my score.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I wonder why LLaMA-based models are not considered in this work.
2. Is it possible to have both the seller and buyer receive feedback in a single experiment? I wonder what will happen.
3. Is it possible to add some story/context to the negotiation so that the LLMs have a better motivation?
4. Is the best policy to just always be fixed at starting price and ask LLM to generate some excuse (Prompt: "Give me some sentences to sell my __ for the highest possible price.")?
[After Rebuttal]
All questions have been addressed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I was unable to find any information related to limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. The reviewer is mostly concerned about the design of our game setting and asks whether our setting can be extended to add more factors like “context”, “motivation”, “personal preference”, and so on.
We would like to note that, although we very much love to study how these factors influence the behavior of language models, we need to set up the basics before pursuing more complex settings. The focus of this work is to study whether it is possible that LLMs can improve each other through AI feedback **while keeping other orthogonal factors minimal and constant**. Specifically:
## We need to set up the basics before adding more factors
The reviewer suggested to add more:
- “context about the goods”
- “value / intrinsic motivation”
- “value”, “time”, “personal preference”
- “story / context”
These are all great directions that we would very much love to explore. However, practically, there are two important constraints that require us to set up the basics first. Specifically:
- At the current stage, some model engines cannot even understand basic commonsense like \\$8 being a lower price than \\$10 (Figure 3). **How can we ask them to incorporate complex ideas like “value”, “personal preference”, and “motivation” when they do not even understand the very basics, like \\$8 being lower than \\$10**?
- In our experimental setup, we carefully disentangle / rule out factors that are orthogonal to the primary factor (AI feedback) we study. This work's goal is to study how AI feedback can improve the deal price while **controlling other confounding factors to be minimal and constant**. This is why we intentionally set other factors to be simple and minimal, so that we can focus on how different LLMs incorporate AI feedback.
Overall, we would love to test all the factors suggested by the reviewer in follow-up work; for this work, however, we need to carefully set up the minimal basics (which is already quite nontrivial) before making the setting more complicated (which is meaningful, but orthogonal to our focus).
## Other important concerns
“why LLAMA based models are not considered in this work”
Because when we started this project, there was no good LLaMA-based model that could understand the basic rules of bargaining; they made mistakes similar to those shown in our Figure 3. This is also why we chose the four model families discussed in Figure 1C.
“Is it possible to have both the seller and buyer receive feedback in a single experiment?”
Yes, we did that in the initial runs, but we observed that the deal price gets stuck at \\$15 because the two players improve simultaneously, making it hard to evaluate the effectiveness of AI feedback. Keeping one player fixed while the other receives feedback enables us to evaluate progress using the deal price.
“Is the best policy to just always be fixed at starting price and ask LLM to generate some excuse (Prompt: "Give me some sentences to sell my __ for the highest possible price.")?”
We are not sure what the reviewer means here. In our practice:
- We always set the initial starting price for the seller to be \\$20 and for the buyer to be \\$10, so that we can control the price range to be within [10, 20] and thus measure progress using the final deal price. If this is what the reviewer means by “always be fixed at starting price”, then we have already done it.
- Our initial prompt to the seller includes “The cost of your balloon is \\$COST_PRICE and your starting price is \\$SELLER_INIT_PRICE. Your goal is to sell it to a high price.” If this is what the reviewer means by “Prompt: Give me some sentences to sell my __ for the highest possible price.”, then we have already done it from the very beginning. Also note that AI feedback, when applied to strong and well-aligned agents like GPT-4, continuously improves the deal price over this initial prompt, suggesting that simply asking the model to “sell it to a high price” is not the optimal strategy.
- Note that for the seller, never lowering the price is not the best strategy because it often fails to reach any deal -- as is shown in our Figure 6, selling too hard comes with a higher risk of breaking the deal.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses. Overall, I think the setting of the experiments is not solid enough:
1. Use of GPT-3.5/GPT-4 as evaluator. If GPT-3 makes a mistake then the results may not be accurate.
2. Meaningless negotiation. There is no context for the LM seller for the 'cost' of the good and no context for the buyer for the 'utility' in the economic model. There is no practical reason where the price should be set. This makes the task more like some NLP benchmark.
3. Limited LLM formats. Different LLMs may behave differently under different prompts, and different prompts may also yield different behaviors. Therefore, I'm concerned about how instructive this work would be for future works.
Overall, I would retain my score for the paper. Please let me know if there are any further misunderstandings.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response. We would like to further clarify certain misunderstandings:
- **We do not use GPT-3 as an evaluator**: there is no “GPT-3” in this paper — in all experiments we use GPT-**3.5** and GPT-**4**. We use the **final deal price as the evaluation** (see our Figures 2, 4, 5, 6), which is a standard practice in AI bargaining research (e.g., see Heddaya et al. 2023). We are well aware of how GPT-based evaluation can be biased (e.g., see Wang et al. 2023); this is why we choose the final deal price as the evaluation, as it is more objective.
- **Our experiments clearly show how the negotiation improves in multiple meaningful ways**. The value of the product is discussed multiple times during negotiation, and we give many examples in the paper (e.g., Figure 2C: “high quality latex and handcrafted by expert artisan”; Figure 7 “made from durable material”)
- **The prompts, especially after AI feedback, are of very high diversity.** We note that one should not view the initial instruction as the fixed prompt (we keep the initial instruction fixed because we need to explain the rules of bargaining to different agents in the same way), but should **view the [initial instruction + round 1 dialog + round 1 AI feedback + round 2 dialog + round 2 AI feedback ...] collectively as the prompt** for the next round of negotiation. To give a sense of how diverse the prompts are: in Figure 6, we run the game for 5 rounds and repeat 500 times, which in total gives 2500 different prompts. Because our prompts are so diverse, we observe multiple meaningful behaviors and a wide range of deal-price distributions. We show some examples in Figure 1B, where we sample four types of AI feedback, used as the updated prompt for the player, and show how it improves the players' strategy.
References
Heddaya et. al. 2023. Language of Bargaining
Wang et. al. 2023. How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources | Summary: This paper studies the strategic multi-agent problem setting of two LLM Players interacting in a negotiation (or bargaining) game and proposes to use feedback from an LLM Critic to improve each Player’s expected behavior and performance in the game. Importantly, the paper aims to study how AI Feedback can enable Player improvement when playing competitive games under well-defined rules. The proposed method is to use the player dialog history and critic feedback as “in-context demonstrations” for players to self-improve over the course of the game. Experiments are conducted on a bargaining game instance negotiating the price of a balloon, investigating several LLMs as base models for the Players (and their Critics).
Strengths: Identified strengths of paper:
- I really like the premise of this work: that agents playing in a strategic game may each be able to benefit and improve their negotiation strategies, not only by observing their opponents, but additionally by heeding the advice of personalized critics whose aim is to help (cooperate with) them. Like other AI feedback methods, this type of approach has the potential to scale better than using Human feedback for the same type of assistance, presuming feedback is helpful and thus desirable for this negotiation task.
- The topic is timely and very relevant to the multi-agent and game theoretic communities, as it takes a long-existing and well-studied problem (multi-agent negotiation) and investigates how rational player strategies can be improved using AI feedback and LLMs.
Weaknesses: Identified concerns and suggestions to improve the paper submission:
- It would be extremely helpful to see examples of what the *instructions* and *in-context learning* looks like for the Critic agents? Examples of their few-shot prompts (or instruction data for Finetuning) are important, as this is a key contribution of the work (adding AI Feedback to strategic reasoning/negotiation settings). Notably, where are the Buyer/Seller Critics getting the suggested negotiation strategies in Figure 1B from (e.g. the flinch technique, the anchoring technique, etc)? They seem to be good, general strategies to have in the player’s toolbox, but require the Critic to have knowledge of effective negotiations and how/when to employ them. Thus, how are the Critics coming up with general negotiation strategies and reasoning about when to apply them? This is important for opening the black box of the AI Feedback component of the proposed method.
- While I like the overall premise, the novel technical contribution of this work seems tenuous. With that, the problem setting is strongly inspired by AlphaGo Zero, but is there any traditional learning in this setting (i.e. any updates to the agent policies based upon the AI feedback given)? That is still unclear to me. Currently the paper seems to simply add a small amount of additional context in the prompt for the Player LLMs. If there is no traditional learning, this setting is also critically different from AlphaGo Zero in that the proposed feedback signal (from Critic Agents) is **not** used to update the Player policies. This should be explicitly clarified. Also, if there are model updates, it would be useful to see the learning update rules. If not, why not do any finetuning of the model? At least as an experimental condition to compare against and understand if in-context learning is "sufficient". Perhaps more motivation and context for this design decision would be helpful.
- This work explores a potentially interesting research direction (negotiation games or strategic gameplay more generally + AI Feedback for improved player strategies) but it’s not clear to me how well motivated AI feedback is for this negotiation game or how interesting the problem is in its current instantiation. In particular, it would be helpful if the paper provided some type of performance analysis to show that the negotiation game used (an instance of balloon purchasing) is interesting and challenging to solve *before* ever adding in LLMs or a Critic. How would existing multi-agent negotiation/bargaining approaches solve this game? What equilibria do the Players generally converge to without LLMs? How and why does using LLMs change the *expected* solution? Given two Player LLMs, why is a Critic *necessary* to have? In other words, if the Critic is being prompted with examples of bargaining conversations, could the Buyer/Seller agents not simply see those same examples and then converge upon the same solution (they are currently finding) *without* a Critic? Or is there some value that the Critic role *uniquely* adds (e.g. more efficient or robust convergence on an equilibrium in this negotiation game)? Motivating the game selected as an interesting and challenging problem in its own right is critical.
- Furthermore, I still question the generality and significance of the paper’s empirical findings. **RE Significance:** How significant and meaningful are the differences in Seller performance in Table 1? The numbers don’t seem that different to me, but perhaps with more context, the significance becomes more clear. With that, I see a distributional shift in Figure 4, but how meaningful is this shift? What are the implications of it, regarding how good or bad these solutions/equilibria are before and after receiving AI feedback? Do the solutions to the negotiation game simply change a small amount but are comparable in terms of how preferable/desirable they are or do they get qualitatively better in some way? **RE Generality:** The paper seems to use only one evaluation domain (balloon purchasing). It is not clear to me which empirical findings/trends from the use of AI Feedback on this one, seemingly simple bargaining domain, are expected to generalize. In particular, regarding the effectiveness and impact of AI Feedback in other multi-agent negotiation domains and settings. This limited evaluation seems more like an interesting case study than general findings that transfer.
- In Subsection 4.3, the paper provides analysis of each of the LLM models (gpt-3.5, gpt-4, claude-v1.0, etc) versus a fixed gpt-3.5-turbo opponent. However, a more thorough investigation examining the cross product of each LLM model as Buyer against every other LLM model as Seller (a *complete* Payoff matrix with all pairwise model comparison) could potentially provide more general insights. Why was this not done? Can it still be done to show a resulting Payoff Matrix? Not doing a pairwise comparison of *all* pairs of models might unnecessarily constrain the results and thus the insights that can be extracted.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please see Weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Overall, the design decisions that we make are closely based on our understanding of what the current LLMs can do, and how to push it further. We aim to clarify the following points:
- **Policy updates by in-context learning from AI feedback**: the reviewer suggested that our approach “simply add[s a] small amount of additional context” and that “the feedback signal is not used to update the policy”. We believe this is an important misunderstanding and beg to differ: it is precisely the small amount of AI feedback, used as new instructions to the model, that changes the model’s policy drastically (given the model is strong and aligned enough), as is shown quantitatively in our Figures 4, 5, 6, and qualitatively in Figure 7. Similar observations are made in Madaan et. al. 2023 and Shinn et. al. 2023.
- **Why not do fine-tuning, and what about settings without LLMs**: unfortunately, our setting is very challenging for models weaker than Claude-instant, as shown in our Figure 3. We have also tried some open-source models and pre-LLM methods, yet they struggle with very basic common sense, such as the fact that \\$17 is a higher price than \\$15. One important message of this work is that the model has to pass a certain bar to even play the game. At the current stage of research, we tend to believe only the GPT and Claude families can play our game and improve over rounds, and we do not have access to fine-tune them.
- **How the critic comes up with suggestions**: in general, if the model engine is at least as strong as Claude-instant, such as GPT-3.5 and Claude-1.3 (Figure 1C), it has no difficulty writing meaningful critiques (which is also the observation of Madaan et. al. 2023 and Bai et. al. 2022). How the model internally arrives at a piece of feedback is still a challenging research problem; our observation is that the model usually writes critiques according to the context. For example, if it observes that the player committed to a price too early in the previous round, it tends to ask the player to stand firm in the next round.
We further clarify how AI feedback updates the player’s policy, specifically:
- **Evidence of the player's policy updates from AI feedback**: when the model has a certain level of ability to understand and follow instructions, it will adjust its strategy based on the critic’s suggestions. One direct piece of evidence is Figure 7, where after rounds of AI feedback, the player becomes more eloquent and word-tuned, more strategic with the starting price, and places more emphasis on product quality.
- **The AI feedback, as the prompt to the model, is the key to triggering the updated policy**: in this setting, the prompt is extremely important and far more than just “small additional context”. Figure 2 is an example where the critic gives three suggestions and the player follows two of them.
- **In-context learning (ICL) is nontraditional but effective and widely deployed**: our setting can be viewed as using the previous round's dialog history + AI feedback as in-context demonstrations (given the model has the ability to follow AI feedback). ICL is indeed different from traditional gradient-based fine-tuning, but it is nowadays a widely deployed learning paradigm that can effectively modify a model's policy/behavior. The mechanism of ICL is still an open research problem, and there is growing evidence that ICL is equivalent to implicit gradient descent (Oswald et. al. 2023, Dai et. al. 2023) — from this perspective, our approach can be understood as using the in-context demonstration data to “finetune” the model so that it has a better bargaining strategy in the next round of negotiation.
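As a hedged illustration of the accumulating-prompt view described above (a minimal sketch; the function and all names are hypothetical, not the paper's code):

```python
def build_round_prompt(initial_instruction, history):
    """Hypothetical sketch: the player's effective prompt grows each round.

    `history` is a list of (dialog, ai_feedback) pairs from prior rounds.
    The fixed game instruction plus all prior dialogs and critic feedback
    are concatenated, serving as in-context demonstrations for the next
    round of negotiation.
    """
    parts = [initial_instruction]
    for round_no, (dialog, feedback) in enumerate(history, start=1):
        parts.append(f"[Round {round_no} dialog]\n{dialog}")
        parts.append(f"[Round {round_no} AI feedback]\n{feedback}")
    return "\n\n".join(parts)
```

Under this reading, no gradient update is needed: the growing context itself is what shifts the player's policy from round to round.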
The reviewer also has concerns about the significance and generality of the experimental results. In response:
- For each setting of the game we repeat it 500 times to ensure the deltas and distributional shifts are statistically significant and consistent.
- We note that the model's behavior exhibits a distribution shift after just one round of the game (Figure 4), where players immediately use the strategies suggested by the critic (Figures 1B and 2).
- The change in bargaining policy becomes more prominent in a multi-round game setting (Figures 5, 6), and qualitatively the dialog becomes more and more word-tuned and strategic, as shown in Figure 7.
Further, we address the reviewer's other important concerns:
- **Why only compare against GPT-3.5-turbo:** because GPT-3.5 is nowadays the default baseline for comparing language models, as adopted by many works such as Zheng et. al. 2023 and Touvron et. al. 2023. Comparing all pairs of models is simply too expensive for us to run (the GPT-4 experiments already cost a fortune). That being said, we agree that a payoff matrix is indeed a meaningful way to compare multiple models, which we will add in the updated version of our paper.
**References**:
Madaan et. al. 2023. Self-Refine: Iterative Refinement with Self-Feedback.
Bai et. al. 2022. Constitutional AI: Harmlessness from AI Feedback.
Shinn et. al. 2023. Reflexion: Language Agents with Verbal Reinforcement Learning
Zheng et. al. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena
Touvron et. al. 2023. Llama 2: Open Foundation and Fine-Tuned Chat Models
Oswald et. al. 2023. Transformers learn in-context by gradient descent
Dai et. al. 2023. Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Masked Space-Time Hash Encoding for Efficient Dynamic Scene Reconstruction | Accept (spotlight) | Summary: This paper proposes the masked space-time hash encoding to efficiently reconstruct dynamic scenes. The insights behind the paper is that: most part of the scene is static, simply modelling such static parts can dramatically increase the probability of hash collisions; while increasing the hash table entries requires more memory. To solve these issues, this paper proposes to decouple the scene into a static part and a dynamic part, where the two parts are jointly learned through an uncertainty mask. The uncertainty mask is modeled by an individual voxel grid, on which each voxel stores an value of the uncertainty field. To bridge the gap between uncertainties and masks, a neural estimator is adopted to approximate the mutual information. Experiments are conducted on the Plenoptic Video Dataset, the Google Immersive Dataset, and the self-collected time-synchronized multi-view videos. As shown in the quantitative results, MSTH surpasses state-of-the-art dynamic NeRF methods (HexPlane, K-Planes, etc) in terms of both the reconstruction quality and training/rendering time. Ablation studies also shows the effectiveness of the proposed masked space-time hash encoding.
Strengths: This is a technically very solid paper. Compared to previous methods, the proposed method can reconstruct dynamic scenes with higher quality while highly reduced the training time. I like the idea of using uncertainty to mask the static part and dynamic part in a scene instead of simply a combination of 3D voxel grid and a 4D voxel grid. The experiments are exhaustive and highly support the effectiveness of the proposed method.
Weaknesses: (1) In Figure 3 (c), it seems some static parts have high uncertainty values (the top-left part and the middle-right part)?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - (1) Is Figure 3(c) revealing that the uncertainty does not accurately model the static part and dynamic part?
- (2) In Section 4.3, the authors proposed two variants for ablation study: (1) A pure 4D hash encoding. (2) A simple decomposition which is an addition of a 3D hash table and a 4D hash table. Actually, for the second variant, I'm not quite sure whether the authors split the scene into a static part or a dynamic part using additional models (for example, using the **segment anything model (SAM)** to split a non-rigid object/human from the static background), and then separately model them. If it is not, I wonder would it better to separately model the dynamic part and static part with some pre-computed masks instead of jointly learn the uncertainty during training?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors address the major limitations of existing dynamic NeRF methods, e.g. the efficiency and rendering quality. Some other limitations, such as reconstructing monocular dynamic scenes, blurry scenes, etc, are mentioned in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: Is Figure 3 \(c\) revealing that the uncertainty does not accurately model the static part and dynamic part?
The dynamic region inferred by the model may include some noise induced by algorithm-irrelevant factors, such as inaccurately estimated camera poses and parameters, or a lack of key points when the scene contains little high-frequency information. In our experiments, the upper-left part of the scene could not be well reconstructed even by per-frame static NeRF algorithms like Instant-NGP. As a result, we consider this data-intrinsic noise that has little impact on visual quality.
> Q2: In Section 4.3, the authors proposed two variants for ablation study: (1) A pure 4D hash encoding. (2) A simple decomposition which is an addition of a 3D hash table and a 4D hash table. Actually, for the second variant, I'm not quite sure whether the authors split the scene into a static part or a dynamic part using additional models (for example, using the segment anything model (SAM) to split a non-rigid object/human from the static background), and then separately model them. If it is not, I wonder would it better to separately model the dynamic part and static part with some pre-computed masks instead of jointly learn the uncertainty during training?
For the variant "A simple decomposition which is an addition of a 3D hash table and a 4D hash table" in the ablation study part, we did not apply any off-the-shelf segmentation model to split the dynamic part from the static part. In this ablation study, the separation of the static and dynamic part is learned implicitly through reconstruction loss without the guidance of uncertainty, which results in sub-optimal performance as shown in Fig 6 in the paper, and proves the effectiveness of the proposed uncertainty estimation.
Incorporating segmentation models into the separation of static and dynamic regions introduces a strong prior that the dynamic region consists of semantically grounded objects. Besides, the identification of dynamic objects requires manual specification. This assumption is not reasonable in many situations. For example, in the *flame salmon* scene in the Plenoptic dataset, some foreground objects (including the cups, the table, etc.) are actually static. Apart from that, some dynamic regions may not fall within the vocabulary of a particular segmentation algorithm. Even if the dynamic part can be roughly captured, its exact boundaries cannot be well inferred without other guidance (e.g., the hands and head of the cooking man in the *flame salmon* scene), and tracking the dynamic objects across multiple videos also introduces extra complexity.
In contrast, our solution provides a more general framework that requires no prior on the dynamic scenes, in which the granularity of dynamic region is automatically determined under the guidance of uncertainty.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Since the discussion with authors is closing soon, could you please go over the reviews and rebuttals, and respond to the content of the authors response with a message to the authors (you can post with one message summarizing all such reviews). It is important that authors receive a reply to their rebuttals, as they have tried to address comments raised by the reviewers.
-AC
---
Rebuttal Comment 1.2:
Title: Thanks for the rebuttal
Comment: Thanks to the authors for the rebuttal. All of my concerns are addressed. Then I decided to keep my current rating for this paper. | Summary: The paper presents MSTH, a method that efficiently reconstructs dynamic 3D scenes from multi-view or monocular videos. The proposed solution uses a space-time hash encoding, a prediction of masks, and uncertainty values that help a method to identify 3D points that belong to moving objects. The intuition is that this masking and uncertainty will help a method identify static points which are easy to reconstruct but also handle well the moving points. Lastly, the formulation of the method allows it to render in fast manner also preserving the size of the model. The paper presents experiments on three real-datasets: Plenoptic Video Dataset, Google Immersive Dataset, and a newly introduced dataset. The presented experiments show that the MSTH improves the PSNR while keeping both the training time and inference time short as well as maintaining a constant size model.
Strengths: In sum, I find the paper to be well executed and explained. Overall, I think the problem of reconstructing dynamic scenes is a challenging but important problem to solve. In detail here are the strengths I find:
S1. I find the use of masking to identify the static and dynamic points an interesting and simple idea. I think the idea has good intuitive rationale and it should be effective.
S2. The inclusion of uncertainty in the formulation I think is an important example of why uncertainty is very important in machine learning. I support the idea of combining uncertainty with the mask prediction as a mean to improve results.
S3. The experiments use real-data and challenging scenes to showcase the benefits of the proposed approach. Overall, I think the experiments are well executed and highlight the benefits of the MSTH.
S4. The clarity and description of MSTH is good. Overall, I think the proposed method should be easy to reproduce given the level of clarity of the narrative.
Weaknesses: Overall, I find the paper to be well executed and clear. I only have one weakness to point out:
W1. I think the paper is lacking a more in-depth discussion about the newly created dataset to test MSTH. While I understand that the algorithm is important, I also think the data part is as important as the algorithm. At the end of the day, the data is required and crucial to solve many problems in the era of deep learning. Thus, I suggest that a final version of the paper includes an extended discussion of the dataset curated by the authors.
While I like the approach and explanation of MSTH, I do have some minor concerns:
W2. The uncertainty described in the method in a way breaks with the intuitive understanding that an uncertainty value corresponds to a confidence value ranging from 0-1 and could be interpreted as a probability of correctness. However, the "uncertainty" introduced in Section 3.3 deviates from this common understanding. I think the paper should explicitly state a concrete meaning of uncertainty to clarify that the common understanding of uncertainty does not apply to this narrative. The main reason I bring this up is because I think this can confuse readers in the future.
W3. In several parts of the narrative (e.g., lines 188 and 186) the paper uses the term "dynamics of a point". I find this term to be confusing since in many fields (e.g., robotics or control) the dynamics typically refer to a model that describes how the state of something changes over time. I think the paper should clarify what this exactly means to clear any misconception of this term.
W4. Missing definition of different terms: 1) $\tilde{m}(\cdot)$ is never defined in the text; 2) what does "large $m$" means in line 168? Isn't $m$ the learned mask? Or is it a value?
----
Post Rebuttal
After the rebuttal, all my concerns were clarified and I still think this is a good contribution. Thus, I maintain my rating.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please see Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: I think the paper clearly states the limitations of the approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1: I think the paper is lacking a more in-depth discussion about the newly created dataset to test MSTH. While I understand that the algorithm is important, I also think the data part is as important as the algorithm. At the end of the day, the data is required and crucial to solve many problems in the era of deep learning. Thus, I suggest that a final version of the paper includes an extended discussion of the dataset curated by the authors.
Due to the line limit for the submission, we only provide a concise description of the proposed Campus dataset in the main text. A more detailed introduction and configuration can be found in the Appendix. We will provide a more comprehensive description of the dataset within the main body.
> W2: The uncertainty described in the method in a way breaks with the intuitive understanding that an uncertainty value corresponds to a confidence value ranging from 0-1 and could be interpreted as a probability of correctness. However, the "uncertainty" introduced in Section 3.3 deviates from this common understanding. I think the paper should explicitly state a concrete meaning of uncertainty to clarify that the common understanding of uncertainty does not apply to this narrative. The main reason I bring this up is because I think this can confuse readers in the future.
The uncertainty we use in this paper is derived from the *aleatoric uncertainty* proposed in [1], which models the intrinsic noise of the observed data with Gaussian distributions. The uncertainty $U(\cdot)$ stands for the learned standard deviation of the underlying Gaussian, which is unbounded by definition. The uncertainty-augmented loss assigns a relatively large uncertainty value to spatial points with high temporal variance, which helps the model separate static and dynamic regions.
[1] Kendall, Alex, and Yarin Gal. "What uncertainties do we need in Bayesian deep learning for computer vision?." Advances in neural information processing systems 30 (2017).
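As a hedged sketch of the Gaussian negative log-likelihood behind this aleatoric formulation (not the paper's implementation; function and variable names are our assumptions):

```python
import math

def aleatoric_nll(pred, target, log_var):
    """Per-sample Gaussian NLL with a learned (log-)variance.

    Residuals the model cannot fit (e.g. pixels in dynamic regions) are
    down-weighted by a large predicted variance, paid for by the
    log-variance penalty term, so high uncertainty flags dynamic content.
    """
    losses = []
    for p, t, lv in zip(pred, target, log_var):
        losses.append((p - t) ** 2 / (2.0 * math.exp(lv)) + 0.5 * lv)
    return sum(losses) / len(losses)
```

Because the learned standard deviation is a free positive quantity rather than a probability, it is unbounded, as noted above.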
> W3: In several parts of the narrative (e.g., lines 188 and 186) the paper uses the term "dynamics of a point". I find this term to be confusing since in many fields (e.g., robotics or control) the dynamics typically refer to a model that describes how the state of something changes over time. I think the paper should clarify what this exactly means to clear any misconception of this term.
We thank the reviewer for pointing out the confusion arising from the lack of clarity in our submission. In the context of our paper, "dynamics of a point" refers to how frequently the state of density and color changes through time, e.g., a more dynamic point will have more density and color states through time, which can be reflected by the uncertainty. In contrast, the traditional understanding of dynamics within robotics and control refers to a model explicitly formulated and constructed by human intervention, devised to capture and predict the behavior of an object. In response to the concerns raised regarding potentially misleading terminology, we will rectify the ambiguities and refine the terminology in the revised version of our paper.
> W4: Missing definition of different terms: 1) $\tilde{m}(\cdot)$ is never defined in the text; 2) what does $m$ means in line 168? Isn't the learned mask? Or is it a value?
We thank the reviewer for pointing out the potentially missing definitions. The symbol $\tilde{m}$ in Eq(2) refers to the unactivated mask value, namely the raw output of the mask branch of the neural network. This $\tilde{m}(\cdot)$ is converted to the final mask value by the $\text{sigmoid}$ function, as shown in Eq(2).
The $m$ in line 168 has the same meaning as in the other equations, namely the corresponding mask value. Our notation is consistent throughout the paper: $m$ is first defined in Eq(2) and retains the same meaning in line 168.
---
Rebuttal Comment 1.1:
Title: RE: Rebuttal by Authors
Comment: Thanks for clarifying. Please make sure these clarifications are discussed and included in a final manuscript. | Summary: This paper present a method for efficient dynamic scene reconstruction. They represent the dynamic scene with a weighted combination of a 3D hash encoding (for static part) and a 4D hash encoding (for dynamic region). The weight is learnable and can be represented by a multi-resolution hash table or a 3D voxel grid. For each query 3D point, they can then get a weighted feature encoding from the hash tables. The photometric loss is then used to supervise the training.
To better supervise the learnable mask/weight representation, they further propose uncertainty-guided mask learning and exploit the mutual correlation between "3D point uncertainty" and "3D point mask" to supervise the learning of the mask representation.
The experimental evaluations are conducted on two public datasets and a self-collected dataset. The experimental results demonstrate the effectiveness of the proposed method.
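The masked blend of static and dynamic encodings summarized above could look roughly like the following sketch (our own plausible reading; the weighting form and all names are assumptions, not taken from the paper):

```python
def blended_feature(x, t, h_static, h_dynamic, mask):
    """Hypothetical sketch of a masked space-time feature blend.

    `h_static(x)` queries a 3D hash encoding, `h_dynamic(x, t)` a 4D
    space-time encoding, and `mask(x)` in [0, 1] is a learned per-point
    weight: 0 means fully static, 1 means fully dynamic.
    """
    m = mask(x)
    return [(1.0 - m) * s + m * d
            for s, d in zip(h_static(x), h_dynamic(x, t))]
```

The appeal of such a blend is that points the mask deems static never touch the 4D table, reducing hash collisions for the genuinely dynamic content.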
Strengths: 1: The paper presents a novel method to improve both the reconstruction quality and efficiency for dynamic scenes. The experimentral results demonstrate its effectiveness.
2: The paper is well written.
Weaknesses: I do not find severe flaws of the paper and would like to discuss with other reviewers if necessary.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The mask representation is not time dependent and the uncertainty supervision is based on the inconsistency between the rendered images & the real captured images. I have following two questions:
1: Since the 3D scene is dynamic, the mask(static/dynamic) for each 3D position should also depend on time. Why you choose to use a time-independent representation.
2: Is it like that as long as the 3D point being occupied by a dynamic object (even for partial frames), it will also be predicted as dynamic (which would have a high uncertainty from Eq. 7)? If so, will your method be able to work well if the dynamic object occupy a larger portion of the image and sweep over most of the 3D space across time. To what extent it can perform well?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: If Question 2 is right, please address the corresponding limitation in Section 5 of the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: Since the 3D scene is dynamic, the mask(static/dynamic) for each 3D position should also depend on time. Why you choose to use a time-independent representation.
We design the time-independent mask mainly for inference efficiency. Specifically, we find the efficiency bottleneck of the hash encoder lies mainly in the hash mapping (rather than the MLP, thanks to the fast tiny-cuda-nn implementation), and for the hash mapping, the bottleneck is memory access time. For a 3D mask, the hash mapping requires eight memory accesses for a trilinear interpolation. If we use a 4D time-dependent mask, the hash mapping requires 16 memory accesses for a quadrilinear interpolation, which makes inference much slower than with 3D masks. Besides, due to the extra time dimension, a 4D mask is also memory-inefficient, while the performance we evaluated is similar to that of a 3D mask.
Concretely, we conducted experiments comparing the time-dependent and time-independent masks on the Plenoptic Video dataset, summarized in the following table.
| Metrics | PSNR | LPIPS | FPS | Memory |
|:-------:|:----:|:-----:|:---:|:------:|
| 3D Mask | 33.1 | 0.051 | 15 | 135M |
| 4D Mask | 32.7 | 0.053 | 9 | 183M |
> Q2: Is it the case that as long as a 3D point is occupied by a dynamic object (even for only some frames), it will be predicted as dynamic (and thus have high uncertainty from Eq. 7)? If so, will your method work well if the dynamic object occupies a larger portion of the image and sweeps over most of the 3D space across time? To what extent can it perform well?
In our implementation, if the dynamics of the underlying scene are highly complex, our solution is to allocate more space for the dynamic part by using a larger hash table. The main problem of the 4D mask is inference efficiency, as discussed in Q1.
Our proposed Campus dataset contains complex multi-view videos that exhibit long-lasting and large dynamic areas (see our supplemental video). The proposed MSTH performs well in these scenes despite using a limited-size dynamic hash table, which substantiates the algorithm's efficacy in handling scenes with significant dynamic areas.
For most routine scenes, the 3D mask is sufficient for both effectiveness and efficiency. For extremely complex scenes, e.g., where the dynamic areas are very large, we believe the more fine-grained 4D time-dependent mask would be the better solution for memory efficiency. We will explore this situation further and sincerely thank the reviewer for this valuable feedback.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Since the discussion with authors is closing soon, could you please go over the reviews and rebuttals, and respond to the content of the authors response with a message to the authors (you can post with one message summarizing all such reviews). It is important that authors receive a reply to their rebuttals, as they have tried to address comments raised by the reviewers.
-AC
---
Rebuttal Comment 1.2:
Comment: Thanks for the clarifications. It is much clearer now. I would keep my original rating. | Summary: This paper tries to address the challenge of efficiently representing 3D dynamic scenes. It proposes a decoupled representation that uses separate neural implicit representations for dynamic and static 3D points. This approach can reduce hash collision and save the storage of the multi-level hash feature grids.
Strengths: 1. An automatic learning procedure to detect dynamic points in the 3D scene. This is achieved by integrating Bayesian learning to treat dynamic points as noise and reduce their weights in the photo-consistency loss.
2. A separate 3D and 4D hash feature grids to save the storage.
3. The optimization of mutual information between uncertainty and mask is demonstrated to be beneficial to the rendering quality.
Weaknesses: 1. The concept of representing scenes with separate static and dynamic parts has been investigated in NeRFPlayer and NeRF in the Wild. Integrating Bayesian learning to predict the dynamic points is also not new.
2. Lacks a quantitative ablation study on the mutual information optimization, which weakens the contribution of this step.
3. Inadequate references. There are missing references related to neural implicit representation for large scale and dynamic scenes:
Xiuchao Wu et al., Scalable Neural Indoor Scene Rendering, SIGGRAPH 2022
Haithem Turki et al., Mega-NERF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs, CVPR 2022
Liao Wang et al., Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time, CVPR 2022 Oral
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: In Fig.6, what is the meaning of each row?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The proposed method is most effective for scenes with a significant amount of static parts. For scenes with a small amount of static parts, the storage savings will be less significant.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weaknesses:
1. > 1.1 The concept of representing scenes with separate static and dynamic parts has been investigated in NeRFPlayer or NeRF in the Wild.
Although the abstract concept of separating static and dynamic parts is not novel and many existing methods like NeRFPlayer, NeRF-W, and MixVoxels adopt this strategy, we emphasize that our separation differs from previous approaches in both purpose (compact storage) and means (estimated via uncertainty). Our separation is soft, which allows very small hash tables to represent large numbers of spatiotemporal voxels without disastrous hash collisions, improving both memory and rendering efficiency. In contrast, NeRFPlayer uses three branches to ensemble results for improving quality, and its design improves neither memory nor rendering efficiency. We summarize the differences in the following table.
| | NeRFPlayer | NeRF-W | MSTH(Ours) |
|:---------------:|:-----------------:|:-------------:|:------------------------------:|
| Purpose | improve quality | remove noises | compactness (small hash table) |
| How to separate | learnable network | uncertainty | uncertainty + MI maximization |
> 1.2 Integrating Bayesian learning to predict the dynamic points is also not new.
We indeed take inspiration from the successful work of NeRF-W. However, NeRF-W and our method aim to solve different problems: NeRF-W uses uncertainty to reduce the impact of transient objects (noise) and does not consider reconstructing them, whereas we treat dynamic points as noise that must also be reconstructed. Because of this important difference, we need to make the uncertainty represent the dynamics through mutual information maximization.
> 2. Lack of quantitative ablation study on the mutual information optimization, which weakens the contribution of this step.
In Table 1 in the appendix, we ablate the mutual information optimization by substituting the mutual information term with other correlation descriptors, including the Pearson correlation coefficient and a hard-coded linear correlation. The results demonstrate the effectiveness of the proposed mutual information correlation. For clarity, we paste the comparison in the following table.
| Correlation | PSNR$\uparrow$ | LPIPS$\downarrow$ |
|:-----------:|:--------------:|:-----------------:|
| Mask | 29.64 | 0.087 |
| Pearson | 29.87 | 0.107 |
| Hard-coded | 28.74 | 0.124 |
| **MI** | **29.93** | **0.063** |
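For illustration only (a generic estimator of ours, not the authors' loss term), the mutual information between two discrete variables, e.g., a binarized mask and a binarized uncertainty, can be estimated from their joint histogram:

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
        mi += (c / n) * log2(c * n / (px[x] * py[y]))
    return mi

# Perfectly correlated binary variables share 1 bit of information:
a = [0, 0, 1, 1]
print(mutual_information(a, a))  # 1.0
```

Maximizing such a correlation between uncertainty and mask encourages the mask to track exactly the points the model is uncertain about.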
> 3. Inadequate references. There are missing references related to neural implicit representation for large scale and dynamic scenes: Xiuchao Wu et al., Scalable Neural Indoor Scene Rendering, SIGGRAPH 2022; Haithem Turki et al., Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs, CVPR 2022; Liao Wang et al., Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time, CVPR 2022 Oral
For the missing references, they are indeed related and we will include them in the related work. (Fourier PlenOctrees [85] is already included in our draft)
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Since the discussion with authors is closing soon, could you please go over the reviews and rebuttals, and respond to the content of the authors response with a message to the authors (you can post with one message summarizing all such reviews). It is important that authors receive a reply to their rebuttals, as they have tried to address comments raised by the reviewers.
-AC | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Extensible Prompts for Language Models on Zero-shot Language Style Customization | Accept (poster) | Summary: This paper introduces a solution called eXtensible Prompt (X-Prompt), which enables instructing a Language Model (LM) using imaginary words. These words serve the purpose of providing instructions to the LM that are difficult to articulate using natural language. In order to prevent overfitting of the LM and facilitate its generalization to out-of-distribution examples, the authors propose two strategies: Template Augmentation and Context-Augmented Learning. Through a series of experiments, the authors evaluate the ability of X-Prompt to generate suitable content in the style of a specific individual, as well as its capacity for zero-shot style transfer generation. The results, both quantitative and qualitative, demonstrate the efficacy of X-Prompt and the utility of context-augmented learning.
Strengths: 1. The paper introduces X-Prompt as an extension of the soft prompt method, offering notable advantages over previous approaches. X-Prompt exhibits enhanced flexibility by enabling its application to out-of-distribution examples, thereby providing significant adaptability. Its novelty lies in its minimal learning cost, in contrast to the substantial efforts required for LLM pretraining.
2. The training methodology, referred to as context-augmented learning, harnesses the capabilities of LLMs to generate novel contexts. This cost-effective approach facilitates the generation of additional training examples and can be potentially applied to various prompt learning experiments, showcasing its versatility.
3. The paper is commendably well-written, featuring informative diagrams that enhance comprehension of the proposed method. Moreover, the inclusion of numerous example prompts and the corresponding generated content greatly enhances the readability of the paper.
4. The experiments conducted are comprehensive, incorporating quantitative metrics such as perplexity and next-token accuracy, as well as qualitative assessments involving human annotators. This multifaceted evaluation methodology ensures a thorough evaluation of the proposed approach.
Weaknesses: 1. Lack of novelty: The paper is an extension of the soft prompt solution. Aside from using LLMs where most previous work focuses on models like BERT, the paper mainly improves the training strategy by introducing context-augmented learning. This contribution alone might not be significant enough for a NeurIPS paper.
2. Limited Experiment Scope: Although labeled as an "extensible prompt," the paper primarily focuses on style transfer generation tasks. It would be valuable for the authors to broaden the experimental scope to encompass additional tasks, akin to the approach taken in the prefix-tuning paper (Li and Liang, 2021). This expansion would further showcase the versatility and potential of the proposed method.
3. Model Performance Concerns: While the experiments understandably revolve around adhering to specific styles, there may be implications for content faithfulness with the use of X-Prompt. Table 7 indicates that the content score of X-Prompt falls noticeably below that of natural language, albeit demonstrating significant improvement over the soft prompt method. The evaluation of content faithfulness in other experiments, such as open-ended generation, remains absent. Lower perplexity may stem from the chosen style rather than from generating appropriate content. More evidence is needed to evaluate the effectiveness of the imaginary tokens.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Missing reference: [Learning How to Ask: Querying LMs with Mixtures of Soft Prompts](https://aclanthology.org/2021.naacl-main.410) (Qin & Eisner, NAACL 2021). I think the idea of this paper is pertinent to the idea of X-Prompt and it is the origin of the term "soft prompt".
2. In section 3.1.1, you mention that 5% of the users are used for validation and test, but you only have 20 authors. Does it mean that 1 user is used for validation and 1 user is used for test? To strengthen the experimental findings and reduce variance, it would be beneficial to include a larger number of users in the evaluation process.
3. Can you give more details regarding the "prompt engineering" section 2.2.1? How do you generate more prompts, and how many?
4. I am curious if the prompt tuning is properly trained to be a baseline. In table 12, the training and inference prompt for the "prompt tuning" method is different. Will this hurt the performance of the soft prompt? Will it be possible to make the training and inference consistent?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper should address a potential limitation concerning the experiments on open-ended generation. The successful generation of content imitating the style of a specific individual raises concerns regarding the potential for generating deceptive or fake statements. If the generated content closely resembles the authentic statements of the person, it could be challenging to distinguish between the two.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our appreciation for your recognition of our work, as well as your constructive feedback and thought-provoking questions. We hope that our following responses will help you better interpret the merits and contributions of our paper, and further improve your impression and evaluation of our paper:
> Weakness 1 and 2: Novelty and Scope
Our primary contribution is the X-Prompt idea, which provides an interface for large language models (LLMs) to include new information and knowledge by compressing indescribable information into a discrete symbol (i.e., imaginary word) that is OOD robust and possesses strong zero-shot compositional ability. We successfully apply this approach to zero-shot language customization, which, to the best of our knowledge, is **among the earliest work addressing LLM customization—an important challenge regarded as the first step towards personalized LLMs and AI agents that may be the most crucial research problems in the new era of AI**. In this regard, our work's originality, scope, and pioneering nature are highly consistent with submissions encouraged by NeurIPS.
As for technical novelty, although our method (learning imaginary tokens represented by continuous vectors via gradient descent) shares some similarities with soft prompts, our proposed innovations (e.g., content-augmented learning) **enable the learned imaginary words to have far superior OOD robustness compared to soft prompts**. This essential difference allows X-Prompt to have **completely different use and application purposes** compared to traditional soft prompts, as the free combination of imaginary words and natural language prompts provides a novel interface for LLM, significantly enhancing LLM's expressive capabilities, **which is a critical ability that soft prompts fail to provide.**
The motivation of our work is different from prefix-tuning. Compared with Prefix-tuning that studies in-domain (ID) learning and does not involve OOD evaluation, we focus on zero-shot language customization, a very important and emerging research problem. In contrast to prefix-tuning, where ID evaluation is easily performed with many existing datasets and evaluation methods, **our research emphasizes OOD compositional evaluation, where both the available evaluation methods and resources are extremely limited**. Therefore, it is almost impossible for us to perform OOD evaluations as extensively as ID evaluations (like conducted in the PrefixTuning paper), because as seen in our paper, a single task's OOD evaluation is already very challenging and reaching the limit within the limited space of this paper. However, **we believe our work represents the most comprehensive empirical study within the scope of zero-shot language customization to date.**
> Weakness 3: The evaluation of content faithfulness in other experiments, such as open-ended generation, remains absent
As you point out, PPL and next word accuracy cannot accurately reflect content faithfulness. That's why we performed human evaluation in Table 7 and **it is exactly the content faithfulness evaluation for open-ended generation**. We kindly ask you to confirm this.
By the way, in our follow-up experiments (conducted after the full paper submission date), we use the GPT-4 to help evaluate the content faithfulness with the prompt ```Please evaluate whether the following text follows the instruction: {prompt}; If it follows the instruction, please rate 1; otherwise, rate 0.```:
| Method | Content Faithfulness |
| :------|:-----:|
| NL | 0.79 |
| Prompt tuning | 0.28 |
| X-Prompt | 0.74|
The GPT-4 scores and the human evaluation results are highly consistent (Pearson correlation r=0.72), indicating that the content faithfulness evaluation is reliable. We will include the new results in the revised version.
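A consistency check of this kind can be reproduced with the plain Pearson formula; the scores below are toy values for illustration, not the paper's data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical binary faithfulness ratings from GPT-4 and human annotators:
gpt4  = [1, 0, 1, 1, 0, 1]
human = [1, 0, 1, 0, 0, 1]
print(round(pearson_r(gpt4, human), 2))  # 0.71
```

A value close to 1 means the automatic rater largely agrees with the human annotators.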
As for the content score of X-Prompt falling below that of natural language, **it is not a surprise because natural language words' OOD robustness is the strongest**. The in-depth reason is that natural language words are all pretrained (with the LLM) on trillions of tokens, seeing sufficiently diverse contexts, which results in their unparalleled generalization/OOD compositional capabilities. While our X-Prompts' OOD robustness is much better than soft prompts, it is still far from natural language words given the limited training FLOPs of imaginary words.
> Missing reference: Learning How to Ask: Querying LMs with Mixtures of Soft Prompts (Qin & Eisner, NAACL 2021).
We appreciate your suggestion, and we will include a comparison and discussion of this related work in our revised version.
> In section 3.1.1, you mention that 5% of the users are used for validation and test, but you only have 20 authors. Does it mean that 1 user is used for validation and 1 user is used for test?
We apologize for the confusion. What we mean is that for each user, we used 5% of their tweets as a test set, not 5% of the users as a test set.
> Can you give more details regarding the "prompt engineering" section 2.2.1? How do you generate more prompts, and how many?
We used a combination of rephrasing and manual brainstorming to generate diverse prompts.
> I am curious if the prompt tuning is properly trained to be a baseline. In table 12, the training and inference prompt for the "prompt tuning" method is different. Will this hurt the performance of the soft prompt? Will it be possible to make the training and inference consistent?
If training and test prompts are the same, it is the **In-distribution (ID)** evaluation setting. However, as we highlight our contribution in the paper, this work focuses on **OOD language customization** (i.e., prompts at test time are unseen during training), which is precisely what X-prompt can do well but soft prompt cannot.
---
Rebuttal Comment 1.1:
Comment: Sorry for overlooking the human evaluation. I thought the "content" score meant the quality of the content; but yes, it is related to faithfulness.
For the novelty point, I still believe that adapting the prompt tuning methods to a new model (LLMs) or a new task (personalized generation) does not suffice for a high-quality machine learning paper. A new idea will be appreciated. But the overall quality of this paper is good so I will keep my score as borderline accept.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Thank you for reading our response and confirming the details.
For the novelty, as we introduce in the abstract section, this paper studies prompting a large language model beyond natural language and explores a novel paradigm of prompting an LLM with **a (zero-shot and OOD robust) combination of NL words and imaginary words**. As far as we know, this is a novel paradigm that little previous research studies.
As our response mentions, OOD evaluation in this work is much more challenging than ID evaluations and it is unlikely to be so extensive as evaluations in a paper that studies ID evaluations because an ideal evaluation should satisfy the following three criteria:
1) It is hard to elaborate the task with natural language (NL) words.
2) It is an important topic with a relatively clear definition.
3) There are available resources (data/method) that facilitate evaluations.
We choose to use style-related tasks as **a case study** in this paper because they meet most of the above three criteria. However, we don't mean that our X-Prompt can work only for these tasks; instead, we think **it is a general approach to empower zero-shot combination of NL words and imaginary words for prompting LLMs**.
We fully understand your desire for our paper to be as perfect as possible. However, we also hope you can acknowledge the various challenges we face when initiating the research of the novel paradigm, especially in the evaluation aspect and that it is unrealistic to resolve all of them **within a single conference paper**. We wish for this paper to be accepted as **a starting point of exploration in this paradigm**, which would spark further research studying zero-shot combination of NL words and newly registered new words (i.e., imaginary words). | Summary: This paper proposes X-prompt: a technique that learns an imaginary token to represent a concept that is hard to describe in natural language. Compared to soft prompt tuning, X-prompt is designed to be OOD robust with template and content augmentation, in which the X-token is trained with various prompt templates and examples of different topic keyword to prevent overfitting. The author quantitatively and qualitatively demonstrated the advantage of X-prompt over prompt tuning on styled text generation and style transfer.
Strengths: 1. Compared to soft prompt tuning, X-Prompt is OOD robust, as shown intuitively in Table 2. The secret sauce is context-augmented learning (CAL), which involves augmented templates and content (topic keywords). CAL is logically reasonable and feasible, and it is very effective at generalizing X-Prompt to OOD, as shown in Table 6
2. Table 11 shows that imaginary tokens of X-prompt trained for a specific task (styled generation for training), can be reused to support other tasks via different natural language instruction (style transfer for inference). This shows imaginary tokens learned from X-prompt can interact with natural language tokens. As such, X-prompt can be potentially utilised for compositional usage.
Weaknesses: 1. Missing baseline: for table 6, the baseline of using NL instruction is missing. As X-prompt's core objective is to learn the concept which is hard to describe in NL, it is desirable to compare with baselines which use NL descriptions. For example, a baseline which puts some example tweets of a specific user as in-context prompt, can be compared. In the same spirit, for qualitative evaluation (Table 7&8), there should be one more NL baseline which prompts the LLM as "Criticise the C++ language in Donald Trump's style".
2. The qualitative evaluation (Table 7 & 8) raises concerns as the styles chosen are from well-known characters (Trump, Satya, Sheldon). This is problematic as the LLM already learned about their styles during pretraining, and such knowledge might be well associated with their names already. For example, the following is the result I get from ChatGPT with query "Praise the C++ language in Donald Trump's style": "The C++ language, folks, let me tell you, it's tremendous. Absolutely tremendous. It's a beautiful, beautiful language.".
3. Following point 2, I think the evaluation well verifies the advantage of X-Prompt over soft prompt tuning. However, there is no robust and detailed comparison with baselines that use natural language descriptions for the styled generation experiment.
I'm willing to reconsider score if the above three concerns are addressed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The style transfer experiment shows X-prompt is much better than NL baseline (Table 11, first row). I wonder how the NL baseline scales with model size (for example, does NL baseline on 175B OPT yields comparable results to X-prompt on 6.7B), and model type (e.g. NL baseline on instruction-tuned model like llama or even ChatGPT, vs X-prompt). This will help people understand the best use cases of X-prompt over NL instructions.
2. Do you think X-prompt can be utilised compositionally: an instruction involving multiple imaginary words?
3. Have you tried multiple tokens per imaginary word? Is 1 token the best setting?
4. I notice the lr 2e-4 is much smaller than the original prompt tuning paper[1], which was 1e-5. Do you find 2e-4 work better for both PT and X-Prompt for the experiments?
[1] Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The author has adequately discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your recognition of our work, your constructive feedback and thought-provoking questions. We hope our following responses will help you better interpret our contributions, and further improve your impression and evaluation of our paper:
> Missing baseline: for table 6, the baseline of using NL instruction is missing ...
**The "8-shot" and "32-shot" in Table 6** are the baseline methods placing example tweets from a specific user as in-context prompts, which are exactly what you suggested.
> The qualitative evaluation (Table 7 & 8) raises concerns as the styles chosen are from well-known characters (Trump, Satya, Sheldon)
We also considered this problem when we designed the experiments. However, for the new task of zero-shot language customization, the available evaluation data and methods are very limited. While we tried to evaluate the results as comprehensively as possible, we still can't use the ideal evaluation protocol to evaluate the results.
Ideally, we should use language styles that the LLM hasn't seen before for qualitative evaluation to fully demonstrate that our X-prompt can describe styles that cannot be prompted by NL. However, **there is a dilemma**: unfamous characters the LLM doesn't know (e.g., my language style) are difficult for the annotators to judge their styles; famous characters with distinct styles are easier for annotators but their styles can be prompted with natural language. We finally chose famous people for qualitative evaluation because: a) we believe the reliability of annotations is more important; b) though we used well-known characters, X-Prompt does not rely on any celebrity-related information, meaning it is character-independent and can be applied to any character (analogous to a language-independent method that can be applied to any language), which can also be supported by the quantitative evaluation results (Table 6), indicating X-prompt can achieve good OOD results for arbitrary users.
In our follow-up experiments, we supplemented the missing part of the evaluation: we evaluated writing styles the LLM (i.e., OPT-6.7B) doesn't know. **We use a senior Chinese media professional, Hu Xijin, whose writing style is distinctive and has always been popular among Chinese netizens for imitation, as an example**. We translated 1,500 of his tweets into English with GPT-4, retaining his writing style as much as possible. One example tweet:
*``` I saw several groups chatting with ChatGPT and making fun of old Hu, and some even predicted that artificial intelligence will make old Hu unemployed. Haha, artificial intelligence pushes everything into a super digital mode, and whoever has the greatest computing power is the king, just like the ever-escalating battle between missiles and anti-missiles. But old Hu is like an everlasting 155mm howitzer, simple, not relying on any trendy stuff. I wish you all not to be "buried alive" by artificial intelligence and become the survivors who crawl out of the "pit of ten thousand people." ```*
| Method | Content | Style | Overall |
| :------|:-----:|:-----------:|:-----:|
| NL |**0.86** | 0.22 | 0.18 |
| NL (with the prompt word "Hu Xijin's style") | 0.61 | 0.20 | 0.15 |
| Prompt tuning | 0.27 | **0.75** | 0.22 |
| X-Prompt | 0.64 | 0.72 | **0.58** |
As shown in the above table, adding the prompt words "Hu Xijin's style" doesn't improve the OPT-6.7B's result, **demonstrating that OPT-6.7B is unaware of Hu Xijin's style. However, X-prompt still achieved excellent results, confirming it's not affected by whether the person is famous or not.**
> there lacks robust and detailed comparison with baselines which use descriptions in natural language for styled generation experiment.
We'll update the above result evaluating Hu Xijin's style, which includes the comparison with NL prompts in our revised version.
> To Question 1
As discussed earlier, our style transfer experiments should ideally involve transferring between unnamable styles. However, we can't find existing datasets in practice that support this ideal evaluation protocol.
For the ease of evaluation, we choose to conduct style transfer experiments with formality and politeness because they have readily available datasets (GYAFC/PoliteRewrite) and reliable evaluation methods, which can objectively reflect the generation quality.
Similar to our previous discussions, our approach is style-independent, though our evaluation is based on experiments with well-defined styles.
We believe the best use cases of X-prompt over NL instructions are, as emphasized in our paper, still for representing what NL prompts hardly describe (e.g., unnamable language style customization). But please understand directly evaluating in these scenarios is very difficult in practice.
> To Question 2
It's a very good question. We initially designed X-prompt with the aim of having strong compositional ability for imaginary words (similar to newly created words). The compositional strength depends largely on X-prompt learning, with CAL playing a crucial role. **Once the imaginary words have seen sufficiently diverse contexts during training (like a natural language word seeing various contexts through large-scale pre-training.), they'll have the similar compositional ability as natural language words.**
> To Question 3
We show the results of using different lengths of imaginary words in **Figure 1 of the supplementary material**. For language style customization, we found using more than 1 token can improve ID performance, but it doesn't help much with OOD. This may be because 1 token is basically sufficient for expressing a specific style.
> To Question 4
2e-4 empirically works well in our setting. We think [1]'s learning rate is larger because **(1)** it uses a different optimizer -- Adafactor, not Adam as in our paper; **(2)** its base model is T5, not GPT/OPT as in our paper; and **(3)** its studied tasks are almost all NLU tasks, not generation tasks as in our paper.
---
Rebuttal Comment 1.1:
Title: Response to Author
Comment: Thank you for providing detailed responses to each of my questions. I have read the responses and the other reviewers' comments. Despite some reasonable critiques, I still believe this paper makes a meaningful contribution by showing that X-Prompt is OOD robust and can be repurposed for new tasks at inference. Such a technique can potentially be extended to compositional use, as previous papers [1,2] have shown that soft prompts can be compositionally combined.
As such, I will raise my recommendation score to 7 to give this paper a chance for getting accepted.
[1] Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, and Noah Constant. 2022. Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9279–9300, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
[2] Hailin Chen, Amrita Saha, Shafiq Joty, and Steven C.H. Hoi. 2022. Learning Label Modular Prompts for Text Classification in the Wild. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1677–1690, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Thank you for raising your score and acknowledging our contribution. We appreciate your understanding of the significance in the compositional use of NL words and soft prompts. Your valuable suggestions are greatly appreciated, and we will follow your advice to further improve our paper. | Summary: This paper proposes eXtensible Prompt (X-Prompt), a new way to prompt large language models beyond natural language. With an extensible vocabulary of imaginary words, X-Prompt allows for more descriptive prompts and is designed to be out-of-distribution robust. The paper also proposes context-augmented learning (CAL) to learn imaginary words for general usability. Experiments with OPT-6B reveal some effectiveness of the method.
Strengths: 1. The paper is well-written and easy to understand.
2. The idea of the X-Prompt is interesting. The use of imaginary words in prompts is an innovative idea that allows for more descriptive prompts and is designed to be out-of-distribution robust.
Weaknesses: 1. The idea is interesting but the method is not novel. The way X-Prompt is trained is similar to continuous prompt learning. And why use only 1 token to learn the imaginary word?
2. The paper lacks a sufficient number of baselines for comparison and utilizes a limited set of datasets. The “Prompt tuning” (maybe) and X-Prompt methods are fine-tuned, which makes the comparison with “No prompt” or “32-shot” unfair.
3. In Table 11, there is a difference in the input format of “Prompt tuning” in the train stage and inference stage. I don’t think [SOFT] can be used directly in the NL.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What does “Prompt Learning” mean in Section 3? Discrete or continuous? I did not find any description in the paper (maybe I missed it)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, the authors discuss the computing resource limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our appreciation for your constructive comments. We hope that our following responses will help you better interpret the merits and contributions of our paper, and further improve your impression and evaluation of our paper:
> Weakness 1. The idea is interesting but the method is not novel. The way X-Prompt is trained is similar to continuous prompt learning. And why use only 1 token to learn the imaginary word?
We believe that continuous prompt learning (by gradient descent) is **a general methodology category**, and **we don't think our work's affiliation with this category negates its novelty**. Like other excellent research in this category, our work features several innovations to address the issues that traditional soft prompts cannot solve effectively, such as zero-shot NL/soft prompt combination for language customization. The nature of X-Prompt is registering new words for the LLM and allowing them to be used as natural language words with strong OOD compositional ability. **This essential difference allows X-Prompt to have completely different use and application purposes compared to traditional soft prompts**, as the free combination of imaginary words and natural language prompts provides a novel interface for LLM, significantly enhancing LLM's expressive capabilities, which is a critical ability that soft prompts fail to provide. We hope you will reconsider evaluating our novelty and contribution based on these innovations.
As for why we use only one token to learn the imaginary word for customizing language style in our work, please refer to **Figure 1 in our supplementary material**, which demonstrates that using more imaginary tokens can increase ID performance (overfitting) but does not show a significant positive effect on OOD performance. We infer that the information content of a specific language style is not very large, and a single imaginary token can already represent it well. Therefore, we use only one imaginary token to represent the style in our paper. However, if we were to represent more complicated knowledge or events (e.g., the World Cup 2022), we agree with your point that a single imaginary token might not be enough, which we plan to explore in future work.
> Weakness 2. The paper lacks a sufficient number of baselines for comparison and utilizes a limited set of datasets. The “Prompt tuning” (maybe) and X-Prompt methods are fine-tuned, which makes the comparison with “No prompt” or “32-shot” unfair.
Please understand that zero-shot language customization itself is a very new research problem, and to the best of our knowledge, there is not much previous work that formally studies this challenge. As a result, there are very few applicable baselines and datasets available for evaluation. **We believe we have made our best effort to conduct a comprehensive empirical study of this problem from various aspects, and our work represents the most comprehensive empirical study within the scope of zero-shot language customization to date.**
As for "No prompt" and "32-shot" baselines, we would like to reiterate that **we included these results to help everyone interpret the empirical results on this new challenge and demonstrate that X-Prompt is capable of achieving what NL prompts struggle with**. If we do not include these baselines' results, readers and reviewers would be left wondering whether our method is indeed better than NL prompts (See Reviewer GeEV who mistakenly thought we didn't have these NL baselines and suggested we should add them).
> Weakness 3. In Table 11, there is a difference in the input format of “Prompt tuning” in the train stage and inference stage. I don’t think [SOFT] can be used directly in the NL.
In Table 11, you are correct that there is a difference in the input format of "Prompt tuning" in the train stage and inference stage. **This is precisely the challenge our work aims to address: OOD robustness -- the ability to work even for prompt templates that are unseen during training.**
Table 11 demonstrates that our approach can handle this issue while prompt tuning cannot, highlighting the essential difference between our work and traditional continuous prompt learning.
> What does “Prompt Learning” mean in Section 3? Discrete or continuous? I did not find any description in the paper (maybe I missed it)
Learned imaginary words are similar to natural language words (i.e., discrete symbols), and their embeddings are continuous vectors. As we mentioned above, learning imaginary words can be regarded as registering new words for the LLM. We'll make it clearer in our revised version.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for your detailed responses, which addressed part of my concerns. So I raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising your score and acknowledging our contributions. We appreciate your valuable suggestions that are helpful to improve this work. | Summary: This paper presents several data augmentation methods for training prompts that include both frozen text and learnable soft tokens so that they are still effective on out-of-domain examples.
Strengths: The keyword extraction method, which creates text prompts that are informative to the current example so as to reduce how much information the model needs to fit into the soft prompt to perform well on the current task, is a good idea that makes a lot of sense.
Care was taken in the creation of the dataset, including the removal of overlapping prompts/keywords between training and testing.
Weaknesses: The proposed X-Prompt approach of combining soft and text prompts is not novel; Gu et al., 2021 https://arxiv.org/abs/2109.04332 and Wei et al., 2022 https://arxiv.org/abs/2109.01652 both touch on how the combination of text and soft prompts can result in differences in performance. Thus this paper would be much stronger if it were framed as the first deep dive into the interaction between text and soft prompts, with the novelty coming from the data augmentation that enables more robust OOD performance. These papers should be mentioned in the related work. The template augmentation approach is similar to the multiple prompts used in papers like Flan (see above) and T0 (https://arxiv.org/abs/2110.08207).
The prose asserts, without citations, that the performance of soft prompts in the OOD setting is poor. Several papers (https://arxiv.org/abs/2111.06719, https://arxiv.org/abs/2110.07904, https://arxiv.org/abs/2208.05577) have confirmed that it is hard to use soft prompts in out-of-domain settings and should be cited.
The prose redefines in-distribution (ID) multiple times (not a weakness, just feedback that doesn't fit anywhere else)
Differences in performance seem rather small, making it difficult to trust the results without some way to capture variance. For example, in the OOD accuracy in Table 6, the difference between Prompt tuning and X-Prompt is 0.6. With the test split being 5% of 52,541, this means X-Prompt only gets 16 extra examples correct. Given the variance in prompt tuning from Lester et al. (2021), it seems like this could be within noise.
The performance of X-Prompt in Table 7 isn't very convincing; it is the strongest in "overall", but that seems to be an artifact of the other methods only being good in one category while X-Prompt is ok in both. It doesn't seem like it is actually the best option.
Rather than framing issues using in-distribution vs out-of-distribution (which seems overly broad) it seems like this paper would be stronger if it was framed to be about task-signaling. It seems that prompts do poorly on a new task because the prompt was the only way to signal to the model to do a new task. Therefore when applied in a new setting it still signals for the model to do that original task. This can be seen as a type of overfitting, but the task-signaling framing makes it clearer why X-Prompt is a good idea.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Why is X-Prompt weaker than X-Prompt w/o CAL on ID tasks in Table 6? It seems like the data-augmentation done with CAL would only help ID performance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: The authors do a good job highlighting the computational requirements for their methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our appreciation for your recognition of our work, as well as your constructive feedback and thought-provoking questions. We hope that our following responses will help you better interpret the merits and contributions of our paper, and further improve your impression and evaluation of our paper:
>The proposed X-Prompt approach of combining soft and text prompts is not novel ... this paper would be much stronger if it was framed as the first deep dive into the interaction between text and soft prompts with the novelty coming from the data-augmentation that enables more robust OOD performance... These papers should be mentioned in the related work.
We really appreciate your suggestions for discussing related work more comprehensively and strengthening our paper's contributions by reframing this paper as the first deep dive into the interaction between text and soft prompts for more robust OOD performance. **We will comprehensively discuss the related work and adjust the positioning of our paper to emphasize this aspect more prominently in the revised version, as you suggested.**
> The prose makes assertions about the performance of soft prompts in the OOD setting is poor without citations ... have confirmed that it is hard to use soft-prompts in out of domain settings and should be cited.
> The prose redefines in-distribution (ID) multiple times (not a weakness, just feedback that doesn't fit anywhere else)
Thank you for your suggestions. You're right: soft prompts struggle to work well in the OOD setting, and that is the motivation for X-Prompt. **We commit to resolving these citation and presentation issues in the revised manuscript.**
>Differences in performance seem rather small, making it difficult to trust the results without some way to capture variance. For example, in the OOD accuracy in Table 6, the difference between Prompt tuning and X-Prompt is 0.6. With the test split being 5% of 52,541, this means X-Prompt only gets 16 extra examples correct.
The 52k samples mentioned in Table 3 refer to sentence-level (tweet-level) counts, while the perplexity (PPL) and accuracy in Table 6 are **token-level metrics**. Considering an average tweet length of about 20 tokens, the 0.6% performance gap corresponds to **312 (= 52000 * 5% * 0.6% * 20) additional correct samples**, not just 16 as you thought. We have conducted significance tests and confirmed that the significance level can reach over 95%. In the revised version, we will clarify these details to prevent any misunderstanding.
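For transparency, this arithmetic can be reproduced in a few lines (a sketch using only the figures quoted above: 52,000 tweets, a 5% test split, ~20 tokens per tweet, and a 0.6% token-accuracy gap):

```python
# Back-of-the-envelope check of the token-level count quoted above.
test_tweets = 52_000 * 0.05          # 5% test split -> 2,600 tweets
test_tokens = test_tweets * 20       # ~20 tokens per tweet -> ~52,000 tokens
extra_correct = test_tokens * 0.006  # 0.6% token-accuracy gap
print(round(extra_correct))          # prints 312
```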
> The performance of X-Prompt in Table 7 isn't very convincing; it is the strongest in "overall", but that seems to be an artifact of the other methods only being good in one category while X-Prompt is ok in both. It doesn't seem like it is actually the best option
In Table 7, the overall criterion evaluates both content and style suitability in the generated results, which is the desired end-to-end evaluation metric to measure the overall quality of generation. Content and style metrics serve as itemized evaluation indicators to help better understand model performance. X-Prompt achieves a good overall score, whereas the NL prompt and soft prompt exhibit significant shortcomings in one aspect, leading to unsatisfactory generation results.
> Rather than framing issues using in-distribution vs out-of-distribution (which seems overly broad) it seems like this paper would be stronger if it was framed to be about task-signaling.
Thank you for your suggestion. As we said before, we will adjust the positioning of our paper appropriately per your suggestion.
> Why is X-Prompt weaker than X-Prompt w/o CAL on ID tasks in Table 6? It seems like the data-augmentation done with CAL would only help ID performance?
CAL makes the learning of imaginary words perform like multi-task learning, while the imaginary word without CAL is analogous to only fitting one task. Therefore, in ID evaluation, the absence of CAL tends to achieve better scores (similar to a single-task learning model performing better on its learned task than a multi-task learning model). | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposed X-Prompt which instructs an LLM with not only NL but also an extensible vocabulary of imaginary words. Besides, context-augmented learning (CAL) is introduced to learn imaginary words for general usability, enabling them to work properly in OOD (unseen) prompts.
Strengths: 1. A concise idea to combine the merits of NL and soft prompts.
2. The paper demonstrated both descriptive capabilities and OOD robustness of X-Prompts.
3. X-Prompt achieves a good balance of BLEU and accuracy in zero-shot style transfer.
4. The paper is well-written and easy to understand.
Weaknesses: 1. The prefix-tuning [1] method needs to be compared in the experiments.
2. As shown in Table 6, X-Prompt has no significant advantage over Prompt-tuning.
3. The generation results of zero-shot style transfer lack human evaluations.
4. Writing content issues.
(1) Please try to use published sources rather than Arxiv sources in citations.
[1] Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. ACL 2021.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Have you considered the case where the imaginary word is more than one in length, or where an X-Prompt contains more than just one imaginary word?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our appreciation for your recognition of our work, as well as your constructive feedback and thought-provoking questions. We hope that our following responses will help you better interpret the merits and contributions of our paper, and further improve your impression and evaluation of our paper:
> The prefix-tuning [1] method needs to be compared in the experiments.
We conducted experiments with the prefix-tuning method and found its results to be comparable to prompt tuning. However, prefix-tuning requires modifying the LLM's forward pass, which is not user-friendly and is undesirable for deployed LLMs in practice. As we show in our paper, our main goal is to enhance the LLM without changing its deployment (i.e., its forward process). That is why we do not compare with prefix-tuning, which requires modifying the LLM's forward pass.
> As shown in Table 6, X-Prompt has no significant advantage over Prompt-tuning.
We would like to kindly point out that you may have misunderstood the results in Table 6: it seems that you may have mistaken X-Prompt (w/o CAL) for our proposed method; in fact, it is the ablated X-Prompt.
In Table 6, the (intact) X-Prompt performs significantly ($p<0.05$; Wilcoxon Signed-Rank Test) better than prompt tuning in the **OOD evaluation** (the last column in Table 6):
| Method | PPL (↓) | Accuracy (↑) |
| :------:|:-----:|:-----------:|
| Prompt tuning |29.5| 38.0 |
| X-Prompt (Our work) | **28.5** | **38.6** |
We would also like to emphasize once again that the primary focus of our study is on the OOD evaluation. The ID evaluation results presented in Table 6 serve merely as a reference.
> The generation results of zero-shot style transfer lack human evaluations.
The reason we choose the style transfer task for evaluation, in addition to open-ended text generation, is that style transfer has reliable automatic end-to-end evaluation metrics that correlate highly with human evaluation results, as confirmed by much previous work (e.g., [1]). We also conducted a human evaluation for the style transfer task on 500 samples and found that the human evaluation is indeed consistent with the automatic evaluation metrics. We did not initially include the human evaluation results in the manuscript due to the 9-page limitation. We provide the results below and will include them in the revised version:
| Method | Content | Style | Overall |
| :------:|:-----:|:-----------:|:-----:|
| Prompt tuning | 0.27 | **0.82** | 0.23 |
| X-Prompt | **0.64** | 0.80 | **0.60** |
FYI, the Pearson correlation score between the overall score (human evaluation) and H-mean (the automatic evaluation metric) is 0.75, demonstrating that they are actually highly correlated.
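For reference, a Pearson correlation between two lists of paired scores can be computed with a few lines of standard-library Python; the scores below are illustrative toy values, not the paper's actual evaluation data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: perfectly linearly related scores give r close to 1.0.
print(pearson([1, 2, 3], [2, 4, 6]))
```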
> Please try to use published sources rather than Arxiv sources in citations.
We will make the adjustments and use published sources to replace the arxiv sources in the References.
> Have you considered the case where the imaginary word is more than one in length, or where an X-Prompt contains more than just one imaginary word?
We discussed the effect of imaginary word length in **Figure 1 in the supplementary material**. For the style customization, we find one imaginary word is sufficient for achieving good OOD performance. While using more imaginary words might lead to improvements in ID performance, the impact on OOD performance would likely be minimal.
[1] Zhang et al: Parallel Data Augmentation for Formality Style Transfer. ACL 2020 | null | null | null | null | null | null |
Social Motion Prediction with Cognitive Hierarchies | Accept (poster) | Summary: This paper proposes a novel approach to address the social motion prediction problem by introducing a new large-scale multi-person 3D motion dataset featuring intense and strategic interactions among participants. The authors formulate the problem using a multi-agent reinforcement learning perspective and incorporate behavioral cloning and generative adversarial imitation learning to boost learning efficiency and generalization. They also take into account the cognitive aspects of the human social action planning process and develop a cognitive hierarchy framework to predict strategic human social interactions. The proposed approach outperforms state-of-the-art methods in challenging long-term social motion predictions.
Strengths: The main strengths of this paper include:
+ Introducing and analyzing the limitations of existing datasets for multi-person motion prediction tasks.
+ Proposing the use of GAIL to improve the generalization ability of the learned policies.
+ Provide a demo video.
Weaknesses: **Good motivation and interesting methods, but insufficient effort to support the claims.** I will state the weaknesses as follows:
+ The distribution comparison in Fig. 2 is against CMU-Mocap, which is an old dataset. To claim dataset novelty, I suggest the authors compare their dataset with more datasets here. A few sub-figures of a few joints are not sufficient to reflect the distribution of the full dataset.
+ For dataset comparison, [1] has 2.2 hours of data and [2] is 6.5 hours. These two datasets should be compared.
+ **Over-claim: I think a dataset of less than an hour cannot claim to be a large-scale dataset.**
+ In lines 29-32, why can [56, 1, 25] not be used for interactive tasks? CMU-Mocap can be used to perform motion prediction and generation tasks.
+ Authors mentioned NTU-RGBD in Section 2.1 but did not compare with it in Table 1.
+ Writing:
+ orphan row (lines 233, 251, Table 3).
+ Line 236: Table 4 -> Figure 4
[1]: Umpm benchmark: A multi-person dataset with synchronized video and motion capture data for evaluation of articulated human motion and interaction.
[2]: InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: **According to review guidelines, 'if the contribution is a dataset or model, what steps did you take to make your results reproducible or verifiable?' The authors did not claim it in the main paper or abstract (It is important. Do NOT claim it in the appendix). Will authors only make a subset of data public or a full set?**
+ Any ablation of $\lambda$s?
+ Multi-person motion capture datasets can be used not only for prediction but also for motion generation. Recent research (HumanMAC: Masked Motion Completion for Human Motion Prediction) uses generative frameworks for motion prediction, and it would be great to discuss motion generation.
+ The w/o GAIL and w/o mid GAIL ablations show that the design contributes little to global pose and local pose. I would like to discuss this with the authors.
+ "This dataset surpasses the existing ones in scale, diversity, dynamics, and interaction, thus posing new challenges to the social motion prediction problem." There are few experiments verifying how the (1) scale, (2) diversity, and (3) interaction benefit the task.
+ Description of annotation, SMPL or SMPL-X? How many joints?
+ For motivation, why do the authors build a dataset with intense, strategic interactions? Why are intense, strategic interactions essential for the task and the community? Intense, strategic interaction is a characteristic of your dataset, but its motivation is lacking. It would be great to show in experiments how these properties benefit the task.
**Although I have many concerns about the dataset, the author's method of technological innovation is still interesting. I provide a borderline score here. If the author could discuss and address my concerns, I would improve my score.**
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's constructive and positive feedback. We are grateful that you are interested in our 'method of technological innovation' with 'good motivation'. In the following, we seek to address your concerns:
**Q1: Compare with more datasets and joints.**
A1: We primarily compare with CMU-Mocap [1] as it is the main dataset used by previous works in this area. We will add more dataset comparisons with other datasets and joints following your suggestion.
**Q2: UMPM and InterHuman dataset.**
A2: Please refer to General Response Q1 for comparisons. We will cite and compare with the datasets following your suggestion.
**Q3: Overclaim data size.**
A3: Thanks for your valuable feedback on our dataset size claim. Nonetheless, it is important to note that the dataset is larger than any previously used in the field of multi-human motion prediction. Our dataset provides 3D skeleton ground truths with intense and strategic human interactions from professional athletes. From such a perspective, we have greatly enlarged the available data in this research field for academic uses. We understand how the term "large-scale" might be misconstrued given the temporal length of the dataset. Therefore, in our revised manuscript, we will carefully modify the wording to more accurately represent the size and value of our dataset.
**Q4:Why cannot be used for interactive tasks.**
A4: While it's true that these datasets can be used for motion prediction and generation tasks, their utility for tasks involving interaction - a key focus of our research - is limited. This is due to the relatively low level of interactiveness, as we mentioned in Line 30-32. For instance, the individuals in these datasets are often recorded while moving casually or interacting randomly with others, rather than participating in strategic or purposeful social interactions.
**Q5:NTU-RGB+D.**
A5: NTU-RGB+D contains some action classes with two interacting humans. We provide a comparison in Table 3 of the rebuttal PDF. We will add the comparisons to Table 1 in the revision.
**Q6: Dataset release.**
A6: The entire dataset, as well as our code and models, will be publicly released.
**Q7: Ablation of $\lambda$s.**
A7: We provide an ablation study on $\lambda$ in Table 2 of the rebuttal PDF.
**Q8: Discuss motion generation.**
A8: Thanks for your suggestion, and our dataset can also be potentially used for motion generation. We will cite and discuss HumanMAC and other related works on motion generation in _Introduction_ and _Related Works_ sections of our revised manuscript.
**Q9: GAIL effects.**
A9: Thanks for your insightful comment. In our manuscript, we have briefly addressed this observation in Line 275-276. Training without GAIL tends to increase the errors in long-term trajectory predictions, while it marginally reduces the errors in local pose prediction. Interestingly, we notice that models trained without GAIL tend to generate motions with smaller magnitudes, often resembling a "freezing" effect. This results in visually less appealing and less realistic animations. Moreover, an important aspect to note is the impact of GAIL on the emergence of cognitive hierarchies within the model. In our experiments, the absence of GAIL leads to a lack of discernible cognitive hierarchy.
**Q10: Verify how the (1)scale, (2)diversity, and (3)interaction benefit the task?**
A10: Thank you for bringing up the important point about the benefits of the scale, diversity, and interaction of our dataset for the social motion prediction task. We are pleased to clarify these aspects:
- Scale: Larger datasets are generally considered beneficial for research tasks as they provide a more extensive base for training models, improving their ability to generalize to unseen data. The advantageous size of our dataset makes it a valuable resource for this task in the research community.
- Diversity: As shown in Figure 4 and discussed in Line 235-242, our dataset exhibits a high degree of diversity. The results demonstrate that the proposed dataset serves as a more convincing benchmark for social motion prediction and introduces more challenges to the field.
- Interaction: As evidenced by Table 2 and Line 245-249, while the HRI [32] model performs exceptionally well on short-term motion prediction, the global context becomes crucial for long-term motion prediction. Our dataset, with its highly interactive nature, offers strong cues for motion prediction with a global context. These interactions are vital to understanding and predicting complex social behaviors.
We appreciate your feedback, and will revise our manuscript to articulate these points more clearly.
**Q11: Annotation.**
A11: Please refer to General Response Q2. We will make it more clear in the revision.
**Q12: Motivation for the dataset.**
A12: Thank you for your insightful question.
- Our primary motivation in focusing on intense, strategic interactions stems from a desire to more accurately capture the complexity inherent in real-world human interactions. These interactions often involve strategic decision-making processes and can be social, competitive, or cooperative in nature. Most existing datasets do not adequately capture these complexities, which is a gap that our dataset aims to fill.
- By incorporating such interactions, we enable the model to learn beyond simple human motion patterns. Our dataset encourages a deeper understanding of human actions, including aspects such as intentions and the decision-making process. Ultimately, our goal is to develop models that not only understand human behaviors but can also interact meaningfully with humans.
- As for how these properties benefit our task, we refer you to discussions in A10 (also Table 4, Figure 2, Line 235-249).
We will revise our presentation to better highlight how these properties of our dataset contribute to the task and benefit the broader research community.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' feedback.
+ For A1,
> We will add more dataset comparisons with other datasets and joints following your suggestion.
Please detail what you will revise, which will make your promise more convincing.
+ For A3, similar to my response to A1, please compare all multi-person datasets in detail, which will make claims more fair.
+ For A4. Authors might revise the statements in the manuscript.
+ For A5-A8, will you release your codes about all experiments in the paper, rebuttal, and appendix? Note that your promise will be open if accepted.
+ For `models trained without GAIL tend to generate motions with smaller magnitudes, often resembling a "freezing" effect.`, any metrics to reflect it? I think a freezing score (velocity lower than xx m/s for yy seconds) is needed to verify it. It is better to provide an anonymous demo link to verify it.
+ For scale, diversity, and interaction properties, if it is not as you claim, please revise it. If the task really benefits from the scale and diversity, I suggest providing more experiments in review, but the authors seem to ignore it. If it is confusing, I take an example here. For the existing relatively small dataset A, you train it and zero-shot test it on your dataset. For your dataset, you train it and zero-shot test it A dataset. This verifies the dataset scaling contribution of your work.
+ For annotation, why do you use 15 joints and visualize in the mesh? Do you use Smplify? Playing basketball is highly related to hand motion, how can only 15 joints reflect it? I think it might not be physically reasonable. Besides, does your dataset include ball annotation?
+ For all writing concerns, I suggest authors list all of them for checking.
Overall, I will maintain my score now. My existing concerns will affect my final rating.
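To make the freezing-score suggestion above concrete, a minimal sketch could look like the following. The `v_thresh` and `t_thresh` values are placeholders standing in for the unspecified "xx m/s" and "yy seconds"; this is an illustrative sketch, not anything from the paper.

```python
import numpy as np

def freezing_score(root_traj, fps, v_thresh=0.1, t_thresh=0.5):
    """Fraction of frames lying in a window where root speed stays below
    `v_thresh` (m/s) for at least `t_thresh` seconds.

    root_traj: (T, 3) root positions in meters.
    v_thresh / t_thresh: placeholder thresholds (the "xx"/"yy" above).
    """
    # Per-frame root speed in m/s from finite differences.
    vel = np.linalg.norm(np.diff(root_traj, axis=0), axis=1) * fps
    slow = vel < v_thresh
    min_len = max(1, int(round(t_thresh * fps)))
    frozen = np.zeros_like(slow)
    run = 0
    for i, s in enumerate(slow):
        run = run + 1 if s else 0
        if run >= min_len:
            # Mark the whole qualifying slow window as frozen.
            frozen[i - min_len + 1 : i + 1] = True
    return float(frozen.mean())
```

A fully stationary trajectory scores 1.0, while a fast-moving one scores 0.0, so the score directly quantifies the "freezing" effect described for models trained without GAIL.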
---
Reply to Comment 1.1.1:
Comment: We appreciate your recent feedback and suggestions. Below, we present our responses to your insights:
1. (A1) We plan to compare with two additional datasets, ExPI [19] and Mupots3D [33], and plot all the major body joints.
2. (A3) Please refer to [General Response](https://openreview.net/forum?id=lRu0dN7BY6&noteId=u8w2zSFa4e) (Q1) and Rebuttal Table 3. We will add them to Section 2.1 and Table 1.
3. (A4) We will revise line 29-32 accordingly.
4. (A5-A8) Yes, all experiments in the paper, rebuttal, and appendix will be included in the open source code.
5. (GAIL) Thanks for your suggestion. Time permitting, we plan to include additional metric comparisons and a video in the comments; otherwise, these enhancements will be incorporated in the revised manuscript.
6. (Scale, diversity, and interaction) We appreciate your insightful suggestion. To further emphasize our point, we will incorporate additional experiments beyond those displayed in Table 4 and Figure 2, and refine our presentation to more effectively highlight the unique properties of our dataset. However, it's crucial to clarify that our intention is not to assert that our dataset contributes by *benefiting the existing benchmarks due to its larger scale*. Instead, we posit that it *introduces significant new challenges in a novel area*, specifically the prediction of strategic human interactions as outlined in A12. Our dataset differs markedly from existing ones in terms of motion content (i.e., basketball tactics versus walking, talking, or dancing). Consequently, we don't anticipate existing methods to readily generalize across these markedly different datasets.
7. (Annotation) Please also refer to [General Response](https://openreview.net/forum?id=lRu0dN7BY6&noteId=u8w2zSFa4e) (Q2) and [Response to Reviewer X2WW](https://openreview.net/forum?id=lRu0dN7BY6&noteId=ED3o4BTKWg) (Q5).
* We use 15 joints for fair comparison with existing methods [59, 19, 54, 57, 40, 32] following their practice.
* We use an [existing tool](https://github.com/wangsen1312/joints2smpl) to fit SMPL model using 3D joints. SMPLify fits SMPL using 2D joints.
* We agree that hand motion is important for studying sport motions. Nonetheless, it is a challenging and unresolved task to accurately recover hand motion in a markerless setting given the small size of the hands, their fast movement, and ball-hand occlusions, which should be further investigated in future work. Meanwhile, our decision to utilize 15 joints (same as previous works) in our model is driven by their ability to adequately capture the broader aspects of human movement, and effectively reflect the interactive properties inherent in our dataset.
8. (Writing) We have compiled a summary of the writing-related issues that need to be addressed during the revision process, provided below for your reference:
* Introduction: revise line 29-32, add CHT overview
* Related Work: add more multi-person datasets, add trajectory prediction datasets and methods, discuss motion generation (e.g. HumanMAC)
* Dataset: revise claims, add more visualization comparisons, add annotation details and motivation, add datasets to Table 1, revise Table 1 dataset name and sequences
* Experiments: orphan row (lines 233, 251, Table 3), Line 236: Table 4 -> Figure 4, add GAIL ablation study comparisons
---
Rebuttal Comment 1.2:
Title: I read other reviews and comment here
Comment: I read other reviews and list my comments to authors, other reviewers, ACs, and SACs.
**For Reviewer Nb5i**
+ For the predicting-1s concern, I do not think this is a weakness. Prediction of very long motions is essentially a generation task; therefore, I suggest the authors present generation results in the experiments, which is a better pretext task.
+ I have the same concern on annotation, like SMPL(-X), ball, and quality (foot sliding).
**For Reviewer 8iac**
+ For `Lack of survey of related work with respect to the methods`, can these works be compared with your methods as baselines?
---
Moreover, I check that the authors clicked the Reproducibility as YES, and I am not sure whether authors should provide (inference) codes in the submission. If not provided, it might be reported to the ethics reviewer.
---
Reply to Comment 1.2.1:
Comment: **Q1: Same concern on annotation, like SMPL(-X), ball, and quality (foot sliding).**
A1: Please refer to [Rebuttal to Reviewer X2WW](https://openreview.net/forum?id=lRu0dN7BY6&noteId=KOXpfZdWpJ) (A5) for discussions on SMPL(-X) and ball. Please refer to [Rebuttal to Reviewer Nb5i](https://openreview.net/forum?id=lRu0dN7BY6&noteId=356r6ugS16) (A2) for discussions on foot sliding.
**Q2: Can these works be compared with your methods as baselines?**
A2: Thank you for your suggestion. We have conducted further comparisons with two recent trajectory prediction methods [a, b] on our dataset. In these comparisons, the models are trained specifically to predict the trajectory, while the local poses of the last input frame are maintained as they are. The results, as shown in the table below, indicate that our method surpasses the performance of these baseline methods.
| | Global | | | | Local | | | | Root | | | |
| ------------------- | -------- | -------- | --------- | --------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | --------- |
| milliseconds | 400 | 600 | 800 | 1000 | 400 | 600 | 800 | 1000 | 400 | 600 | 800 | 1000 |
| Social-STGCNN [a] | 188.2 | 246.4 | 298.8 | 346.5 | 62.8 | 82.8 | 98.9 | 112.1 | 173.1 | 225.6 | 273.7 | 317.9 |
| SocialVAE [b] | 84.0 | 127.2 | 171.7 | 215.1 | 62.8 | 82.8 | 98.9 | 112.1 | 52.6 | 89.7 | 130.3 | 170.8 |
| Ours | **54.6** | **86.2** | **119.3** | **152.5** | **43.7** | **60.8** | **74.6** | **86.6** | **41.7** | **66.9** | **94.8** | **124.0** |
**Q3: I am not sure whether authors should provide (inference) codes in the submission.**
A3: For clarity, you may refer to the guidelines provided in the NeurIPS Paper Checklist: https://neurips.cc/public/guides/PaperChecklist. Under the _Experiments_ section, it is specified that code and data should be submitted if the option is checked. This is distinct from the _Reproducibility_ section. The entire dataset, code, and models, will be publicly released upon acceptance.
**References**
[a] Mohamed, Abduallah, et al. "Social-stgcnn: A social spatio-temporal graph convolutional neural network for human trajectory prediction." CVPR 2020.
[b] Xu, Pei, Jean-Bernard Hayet, and Ioannis Karamouzas. "Socialvae: Human trajectory prediction using timewise latents." ECCV 2022.
---
Rebuttal 2:
Title: After rebuttal
Comment: The authors did not submit any rebuttal. Do authors want to participate in author-reviewer discussions?
---
Rebuttal Comment 2.1:
Title: Rebuttal submitted
Comment: Dear Reviewer yiis,
We appreciate your attention to the review process. We have indeed submitted our 'Author Rebuttal', which includes a general response as well as individual responses to each reviewer's comments. These should be visible to the 'Reviewers Submitted' group. Could you kindly recheck your portal? If there is still an issue, please let us know. To ensure the process continues smoothly, we have also reached out to the conference chairs for their assistance.
Thank you for your understanding and cooperation.
Best regards,
Authors | Summary: The paper addresses the problem of social motion forecasting utilizing a multi-agent reinforcement learning and combines behavioral cloning and generative adversarial imitation learning. Social and strategic interactions are modeled in a “cognitive hierarchy framework”. The paper further introduces a new large-scale 3D multi-human motion dataset based on no-dribble-3-on-2 basketball.
Strengths: The paper is, in general, well-written and the method well-motivated. The novel dataset is highly relevant in the under-explored area of multiple human motion forecasting.
Weaknesses: In general, 1s of forecast for such complex social interactions is too short, as players mostly just continue their motion, e.g., the swing of an arm. In basketball there is the 5s rule; I would thus suggest forecasting at least 5s of future motion to get a good understanding of the interactions and of the game. Instead of MPJPE, metrics for long-term motion forecasting could be employed, e.g., [b,c].
The method's results in the supplementary video exhibit a lot of foot sliding, suggesting a generalization problem for walking motions. I would assume that this is due to using 3D velocities as actions, which are translation invariant but crucially not rotation invariant. The choice of rotation-variant actions is surprising as the introduced dataset (almost) plays out on a circle, with 3 players on the circle and 2 players within. Did the authors try to alleviate the issue by augmenting the data?
The authors say that "Human motions exhibit greater dynamism in terms of pose diversity and movement speed, making motion prediction more challenging than in previous datasets” (LL42-44) but then say that players are not allowed to “dribble” (L111 & L115) - does that not significantly reduce the range of motion, i.e. there is little global translation?
The authors should elaborate on how they obtained the body joint angles (L144, Figure 2). In Figure 1 it seems that the authors extract SMPL parameters as the pose representation, but this is never specified - however, extracting angular representations from 3D joint locations is non-trivial and the authors need to explain how those are produced. The lack of this information makes Figure 2 unconvincing: assuming SMPL was used, the skeletal structure is quite different from the parametric model used in CMU [1]; we would thus expect significant differences in angular distributions. For example, hinge joints in CMU are zero everywhere except at the hinge dimension - is this the case in the authors' representation as well?
The paper should replicate the MRT [59] experiments - at least for the available datasets CMU Mocap and MuPoTS-3D.
HRI [32] originally was trained on local (normalized) motion and not on global coordinates: in this work, are the global coordinates passed to HRI for training and evaluation? How would HRI perform with normalization such as subtracting the root location at the last input frame? Such normalization can be undone trivially and should greatly improve the (short-term) motion prediction results. The evaluation as-is is unfair for HRI if global coordinates are passed.
**Minor issues**
LL54-55: While CHT [9] is cited and put into context (LL98-105) a short overview would be helpful, as it is not clear from the paper what the authors mean by “k represents the depth of strategic thought” (L55).
An interesting dataset for evaluating the approach would be the Haggling dataset [a] where persons play a triadic haggling game, an adversarial game between two players to sell a product to a third person (buyer). This work should be at least cited and compared against in Table 1.
In the context of 3D human motion forecasting planning or “intention” has been addressed as intermediate signal in previous works [c,d] and should be cited.
[a] Joo, Hanbyul, et al. "Towards social artificial intelligence: Nonverbal social signal prediction in a triadic interaction." CVPR 2019.
[b] Gopalakrishnan, Anand, et al. "A neural temporal model for human motion prediction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[c] Tanke, Julian, Chintan Zaveri, and Juergen Gall. "Intention-based long-term human motion anticipation." 3DV 2021.
[d] Diller, Christian, Thomas Funkhouser, and Angela Dai. "Forecasting characteristic 3D poses of human actions." Proceedings of the CVPR 2022.
**Rebuttal**
I appreciate the effort made by the authors to answer my questions.
However, I still believe that 1s is too short for what the authors claim their dataset is capable of demonstrating: intricate "strategic" social interplay.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: I wonder if it would be possible to evaluate the combined motion of all persons, i.e., whether not just the per-person poses but also the global configuration of the poses is sensible. Imagine, for example, one person throwing the ball to the person on their left while the person on their right generates the "receive" motion - which would be unlikely.
It is unclear which version of PoseTrack the authors address in Table 1: PoseTrack17 [23] contains 556 sequences (and not 13) and PoseTrack18 [5] contains 1138 sequences. Could the authors elaborate if this version is a subset of PoseTrack?
I would assume this is an encoding error but in the supplementary video every 4 to 5th frame “freezes” making it a bit more difficult to check the results frame by frame in video (QuickTime Player, OSX).
What was the motivation of color choice for Visual representation of the Cognitive Hierarchy Visualization: I would suggest to use warm colors for the offense and cold colors for the defense as especially purple and rose are too close to each other.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors address some limitations, notably the assumption that a certain cognitive hierarchy is present in the data.
It would be interesting to see how far into the future the model can predict without producing failing poses or freezing up.
Flag For Ethics Review: ['Ethics review needed: Compliance (e.g., GDPR, copyright, license, terms of use)']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's constructive and detailed feedback. We are grateful that you find our paper 'well-written', our proposed method 'well-motivated', and our dataset 'highly relevant in the under-explored area'. In the following, we aim to address your concerns:
**Q1: 1s is too short.**
A1: We appreciate the reviewer's insightful suggestions. We understand the importance of longer-term predictions in understanding complex social interactions and the dynamics of the game. However, we would like to provide the following points to clarify our choice of a 1-second forecast horizon:
- Basketball is characterized by its swift pace, whereby a significant amount of action can occur within a 1-second timeframe. In full-court matches, a possession often concludes within a few seconds. Our research scenario is even more intense and compact compared to conventional basketball games, and it operates at a faster pace.
- Our video demonstration reveals that the 1-second future doesn't merely continue past motions. Instead, it involves defensive switches and sudden changes in moving directions. As a matter of fact, our level-1 policy is primarily motion continuation, but our policy network learns complex strategic behaviors at higher levels through the proposed cognitive hierarchy design.
- We opted not to use a longer prediction horizon primarily because future motions heavily depend on the uncertain outcomes of passes. For instance, if player A is passing to player B, the subsequent motions will be greatly influenced by whether the ball is successfully received by B or stolen midway. Predicting such outcomes based on current information poses a considerable challenge for any model.
We plan to extend our datasets to include regular basketball games with more information and explore longer prediction lengths for future research.
**Q2: Foot sliding.**
A2: Foot sliding might be partly due to our model not explicitly regularizing foot contact velocities as previous works have done [R17, R18, R19]. Our research primarily targets predicting the strategic action decisions rather than the quality of the motion itself. Hence, we did not initially design our model to address these detailed aspects of the motion quality. Based on the reviewer's suggestion, we plan to explore training techniques such as foot contact loss and data augmentation to enhance the visual quality of the motion predictions. We believe these techniques could potentially alleviate the foot sliding issue.
**Q3: Little global translation?**
A3: The no-dribbling rule in this practice forces the players to make more strategic decisions. While dribbling is prohibited for the offensive players, it doesn't necessarily result in a significant decrease in the overall range of motion. It's important to note that defensive players are not under the same restriction. They are required to continually move back and forth to maintain adequate defense coverage. This requirement leads to substantial global translation, as evidenced by the video we have provided. Therefore, while the scope of certain specific motions is reduced, the overall diversity and dynamism of human motions are preserved, and the challenge of predicting those motions remains high. The large root error (especially for the 'frozen' baseline) in Table 2 of the submission also demonstrates this idea.
**Q4: Body joint angles.**
A4: As discussed in General Response Q2, our dataset uses a 3D pose representation. The previous work MRT [59] converts the CMU-Mocap dataset to a format of 15-joint 3D poses. We strictly followed its definition and performed data analysis on the aligned skeletal structure. We directly computed the angle of intersection formed by the hinged limbs, since we do not have the limb twists in the 3D pose representation.
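A minimal sketch of this hinge-angle computation from 3D joint positions (illustrative only, not the authors' actual analysis code; the joint names are hypothetical):

```python
import numpy as np

def hinge_angle(parent, joint, child):
    """Angle (radians) at `joint` between the two limb segments
    joint->parent and joint->child, computed directly from 3D positions.
    """
    u = parent - joint
    v = child - joint
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Example with a hypothetical leg: a straight limb gives pi (180 deg),
# a right-angle bend gives pi/2.
hip = np.array([0.0, 1.0, 0.0])
knee = np.array([0.0, 0.5, 0.0])
ankle_straight = np.array([0.0, 0.0, 0.0])
ankle_bent = np.array([0.5, 0.5, 0.0])
straight = hinge_angle(hip, knee, ankle_straight)  # ~pi
bent = hinge_angle(hip, knee, ankle_bent)          # ~pi/2
```

Note this recovers only the included angle, consistent with the point above that limb twist is unavailable from a pure 3D joint representation.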
**Q5: Replicate the MRT experiments.**
A5: Following your suggestion, we train and test our method on the CMU-Mocap (UMPM) dataset [40]. Please refer to Table 1 of the rebuttal PDF for the results. Notably, our method exhibits particular strength in predicting long-term root trajectories and global motions.
**Q6: Details on HRI.**
A6: When training and evaluating HRI, we pass in normalized motions following the original settings. Therefore, it achieves respectable short-term motion prediction accuracy in Table 2, as the reviewer surmised.
**Q7: CHT overview.**
A7: We provide an intuitive explanation on CHT and level-k reasoning in Line 51-55 and Line 195-197. We will add a short overview to Section 2.3 following your suggestion.
**Q8: Haggling dataset.**
A8: Please refer to General Response Q1. We will cite and compare with it following your suggestion.
**Q9: Intention as intermediate signal.**
A9: Thank you for the information, we will add them to Section 2.
**Q10: Evaluating the combined motion.**
A10: The GAIL global discriminator $D$ is developed on a similar idea, which penalizes global state-action pairs that are unlikely to occur. For a quantitative evaluation of the combined motion of all persons, incorporating a user study could be beneficial. We appreciate this suggestion and plan to include such a study in our future work.
**Q11: PoseTrack version.**
A11: For PoseTrack, we compare with the version provided by SoMoF [21], which may cause confusion in the number of sequences given the different definitions. We apologize for the confusion. We will change the dataset name to 'PoseTrack (SoMoF)' and revise the sequence numbers in the next version.
**Q12: Video encoding error.**
A12: Thank you for bringing this issue to our attention. We have thoroughly reviewed the supplementary video and were unable to replicate the issue. Could you please provide us with your OS and QuickTime Player version? We will ensure that our video encoding is widely compatible in the future release.
**Q13: Color choice.**
A13: Thanks for your suggestion! We will change the visualization colors accordingly.
---
Rebuttal Comment 1.1:
Title: References (also appended in the General Response)
Comment: [R17] Tevet, Guy, et al. "Human motion diffusion model." ICLR 2023.
[R18] Tseng, Jonathan, Rodrigo Castellon, and Karen Liu. "Edge: Editable dance generation from music." CVPR 2023.
[R19] Rempe, Davis, et al. "Humor: 3d human motion model for robust pose estimation." ICCV 2021. | Summary: The paper has two major contributions to the field of multi-person motion detection:
1) A new open source dataset that is sufficiently large scale as compared to those already available. But more importantly is dense in terms of strategic interactions and more diversity of action distributions for reinforcement learning algorithms. This is very relevant to the field.
2) To demonstrate the efficacy of the dataset, they also propose a new imitation learning framework combining behavior cloning as well as GAIL.
Strengths: 1) The paper is very well written with clear figures to make the concepts easy to understand.
2) The dataset is a welcome addition to the space of close space multi agent interaction modeling.
3) The experimental section is very detailed with an ablative section to help understand the implications of each of the contributions of the MARL framework.
Weaknesses: The paper suffers from two major issues. The authors ignore a large body of work that is not directly related to close-space interactions but is highly relevant to the field of multi-agent trajectory prediction. This includes both datasets and methods available to obtain state-of-the-art results in the domain.
1) Lack of survey of related work with respect to the dataset: Many works in the autonomous driving domain feature highly complex social scenarios with many agents interacting with one another. The datasets include industrial benchmarks like Argoverse, nuScenes, and the Waymo Open Motion Dataset [1], amongst others. While these datasets include not just pedestrian trajectories but also other road users like vehicles and cyclists, a comparison must be made about how the complex multi-agent crowd-level interactions are similar to and different from the close-space interaction modeling studied in this paper. The crowds of pedestrians in these scenes perform complex social maneuvers, working cooperatively (crosswalks) and destructively (jaywalking) as examples, and how these strategies are relevant to this field is an important comparison. Given the existence of these other much larger datasets in terms of the number of subjects available, it will be important to understand what benefits the new dataset adds to the field.
2) Lack of survey of related work with respect to the methods: On the datasets mentioned above, a large body of work exists involving both simple deep-learning transformer and non-transformer-based Bayesian methods as well as reinforcement learning frameworks [4] (as an example) against which the MARL framework needs to be compared.
[1] Waymo Open Motion Dataset
[2] Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Would it be possible to include the relevant recent work in the field of multi-agent human trajectory prediction?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's constructive and positive feedback. We are happy that you find our paper 'well-written', our proposed dataset 'a welcome addition', and our experiments 'very detailed'. In the following, we aim to address your questions and concerns:
**Q1: Lack of survey of related work with respect to the dataset.**
A1: Thank you very much for the suggestion! We provide a discussion on the multi-agent trajectory prediction datasets in General Response Q1. We will cite and discuss these works in Section 2.1 in the next revision following your suggestion.
**Q2: Lack of survey of related work with respect to the methods.**
A2: Thank you very much for the suggestion! We referred to the relevant recent work in the field of multi-agent human trajectory prediction in Line 20-26. We will cite and discuss more research works in this area in Section 2.2 in the next revision following your suggestion. By the way, is part of the review references ([4]) missing? | Summary: This paper aims to predict multiperson human motion. The main contributions of this paper are:
1. The paper presents a large-scale multi-human 3D motion dataset with intense, strategic interactions.
2. The paper formulates the multiperson human prediction problem as MARL and presents a hierarchy framework to model complex social interactions.
Strengths: The paper has several strengths:
1. Overall, the paper is well-written with a clear and well-motivated introduction. The hierarchy framework is introduced to consider the recursive decisions in the interaction.
2. The paper proposed a large-scale multi-person motion dataset
3. Evaluation on the proposed dataset demonstrates that the method outperforms state-of-the-art methods.
Weaknesses: 1. In L32-34, the paper claims that most existing methods employ end-to-end supervised training which overlooks the cognitive aspects. Considering the state (past motion) and action (future velocity) space defined in this paper, could the author provide more insight on the difference between the BC, GAIL used in the paper and GAN-based supervised learning in related work?
2. While this work qualitatively demonstrates the utility of the proposed framework for tasks specific to motions like sports, it would be beneficial to provide additional comparisons on existing benchmarks in addition to Figure 4.
3. Other multi-person datasets in sports also exist, such as KTH Multiview Football datasets. The paper may compare the proposed dataset and method with them.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please check 1. - 3. in Weaknesses.
In addition, why was the best performance achieved when K=3? if the claims of the cognitive hierarchy are true, I was wondering what happened after having more levels of decision.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The visualization of the output from each level shows some existence of cognitive hierarchies, but it should be more clear if the paper and the demo video could include the visuals in meshes (with ball movement if possible) like Figure 1.
Overall, the author's response to the concerns in the Weakness section is needed to make the final decision. I am happy to increase the rating if my concerns are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's constructive and positive feedback. We are grateful that you recognize our main contributions and find it 'well-written with a clear and well-motivated introduction'. In the following, we aim to address your questions and concerns:
**Q1: Provide more insight on the difference between the BC, GAIL and GAN-based supervised learning.**
A1: Wang _et al._ [57] initially introduced an imitation learning formulation for single-person human motion prediction with BC and GAIL, as an alternative to end-to-end supervised training. Our work extends conceptually from [24, 57], as it adapts BC and GAIL to a multi-agent imitation learning context. Our approach is distinct from GAN-based supervised learning in two primary ways:
- The GAIL objective focuses on making the state-action pairs produced by the joint policies of agents and that of experts indistinguishable. This is in contrast to the GAN framework (_e.g._, [59]), where the objective is to make the predicted future motion indistinguishable from the real ones.
- Our MARL formulation enables a more principled integration with the cognitive hierarchy theory, while existing works overlook the cognitive aspects of human social action planning. By modeling the motion prediction process with chained policy networks, we derive the BC and GAIL regularization for each level accordingly, and propose to share parameters for policy networks $\phi_{(1)} \dots \phi_{(K)}$, both of which lead to performance improvements as shown in Table 3 of the submitted manuscript.
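For concreteness, the standard GAIL discriminator objective over state-action pairs can be sketched as follows. This is an illustrative sketch of the generic GAIL objective (not the paper's implementation), which makes the contrast explicit: the discriminator scores (state, action) pairs rather than predicted-vs-real future motion sequences as in a GAN setup.

```python
import numpy as np

def gail_discriminator_loss(d_expert, d_policy):
    """Binary cross-entropy minimized by the GAIL discriminator.

    d_expert / d_policy: discriminator output probabilities in (0, 1)
    for expert and policy (state, action) pairs, respectively.
    The discriminator is trained to score expert pairs near 1 and
    policy pairs near 0; the policy is trained to fool it.
    """
    eps = 1e-8  # numerical guard against log(0)
    return -(np.log(d_expert + eps).mean()
             + np.log(1.0 - d_policy + eps).mean())
```

A discriminator that separates expert from policy pairs well (e.g., expert scores near 0.9, policy scores near 0.1) attains a lower loss than one that cannot distinguish them at all (both near 0.5).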
**Q2: It would be beneficial to provide additional comparisons on existing benchmarks.**
A2: Following your suggestion, we train and test our method on the CMU-Mocap (UMPM) dataset [40]. Please refer to Table 1 of the rebuttal PDF for the results. Our approach outperforms the previous methods [32, R2, 59] and is comparable with a concurrent work [40]. Notably, our method exhibits particular strength in predicting long-term root trajectories and global motions.
**Q3: Other multi-person datasets in sports also exist, such as KTH Multiview Football datasets.**
A3: Please refer to General Response Q1 for the discussions. We will make it more clear in the revision.
**Q4: Why was the best performance achieved when K=3?**
A4: Our cognition-inspired framework aims to learn joint policies from real-world expert demonstrations and to discover the explainable cognitive hierarchies. The finding that $K=3$ provides the best fit for the real-world dataset may suggest that this particular depth of strategic reasoning most aptly encapsulates the human decision-making process as captured in our collected data. As we increase the number of hierarchical levels ($K>3$), we continue to observe the cognitive hierarchy visualizations among different levels, though there is a slight increase in the mean prediction error.
**Q5: Could include the visuals in meshes (with ball movement if possible).**
A5: Thanks for your suggestion!
- **Body Mesh**: As discussed in General Response Q2, we use 3D poses in the experiments. While the 3D poses can certainly be fitted to a parametric body mesh (such as SMPL, as shown in Figure 1) for a more intuitive visualization, this process may introduce additional errors. To avoid these potential complications, we chose to present the 3D skeletons as they are -- the direct output of the model, free from potential distortions introduced by fitting. This approach makes it easier to identify potential issues. However, in response to your suggestion, we will include the mesh fitting and visualization scripts in our code release.
- **Ball movement**: We implemented a 3D ball trajectory estimator using an object detector and multi-view triangulation. However, we chose not to include the ball information in model training for two reasons: 1) to ensure a fair comparison with previous methodologies, and 2) because the ball moves significantly faster than humans, which sometimes results in inaccurate estimates caused by motion blur and detection failures. We are committed to improving our ball detection algorithms, and in response to your suggestion, we will include the 3D ball trajectory estimator and visualization tools in our code release. | Rebuttal 1:
Rebuttal: # General Response
We sincerely thank all reviewers for their meticulous reviews and constructive remarks. It is heartening to note that all reviewers have acknowledged the motivation behind our work, as well as its main contributions — especially the proposed method. We will incorporate discussion of relevant works and additional experiments in the final version accordingly. In the following, we will first address some shared issues on the proposed dataset, then respond to each reviewer's comments respectively.
**Q1: Dataset comparisons**
A1: We discuss the existing multi-person motion datasets in Section 2.1 and provide a comparison with some of them (Table 1 of the submitted manuscript). The table contains comparisons with _'existing multi-person motion datasets employed by previous works on the multi-person motion prediction task'_ (table caption). In other words, we primarily compare with the utilized datasets within this domain as referenced in previous works [2, 3, 58, 59, 19, 54, 40, 63]. We recognize that there are other multi-person datasets that are relevant to our work, as recommended by the reviewers. Consequently, we present a brief discussion on the differences between our work and each of these datasets below. **These works will also be cited and discussed in Section 2.1 and Table 1 of the revised manuscript following your suggestions.**
- **Multi-person Sports Datasets (Reviewer X2WW).** There exists a variety of multi-person sports datasets for group behavior understanding, e.g. NBA [R3], MultiSports [R4], Volleyball [R5], NCAA [R6]. However, these datasets are primarily designed for action recognition from RGB videos and lack pose annotations or 3D information. Felsen _et al._ [R7] collect a water polo dataset and a basketball dataset for group trajectory prediction, but they only contain trajectory-level data rather than human poses. The KTH Multiview Football dataset [R8] consists of two subsets (2D and 3D), with the 3D subset containing merely 800 time frames (32 seconds) for two isolated players. Therefore, it might be more suitable for testing or few-shot adaptation.
- **Multi-agent trajectory prediction datasets in autonomous driving (Reviewer 8iac).** As suggested by the reviewer, there is a large body of work on multi-agent trajectory prediction that is _'not directly related to'_ our task, but _'highly relevant'_. We briefly discussed this research area in Lines 20-26. Given the different task objectives, the datasets also exhibit different characteristics. The trajectory datasets [R9, R10, R11, R12] possess advantages such as larger scale and diverse classes of agents. Nevertheless, these datasets only contain coarse-grained interactions at the trajectory level (mainly collision avoidance) and fail to capture the rich and fine-grained human actions. Furthermore, our dataset features intense and strategic interactions, such as frequent sudden direction changes and deception maneuvers, which are largely absent from existing datasets. Thus, our dataset can serve as a valuable supplement to the existing ones, posing additional challenges to this research field.
- **Haggling dataset [R13] (Reviewer Nb5i)**. The Haggling dataset is relevant to our work in that it also involves social interactions and body gesture prediction. The main difference is that it focuses on a triadic conversational scenario; therefore the interactions are relatively simple and the body motions more static.
- **UMPM benchmark [R1] (Reviewer yiis).** The UMPM dataset is a multi-person mocap dataset. As per the paper, 2/9 of the scenarios contain interactions between the subjects and are suitable for our task: _(6) a conversation with natural gestures, and (7) the subjects throw or pass a ball to each other while walking around_. Compared to it, our dataset is advantageous in terms of a larger number of subjects and more strategic interactions. We also conduct experiments on an extended version of UMPM following [40] and the results are reported in Table 1 of the rebuttal PDF.
- **InterGen [R14] (Reviewer yiis).** This concurrent work introduces a multimodal dataset called InterHuman for text-to-multi-human motion generation. Consequently, their primary objective is to cover a diverse range of interactions and annotate with extensive textual descriptions. In contrast, the focus of our dataset is intense and strategic human interactions for social motion prediction. We believe that both our dataset and InterHuman offer valuable additions to the research community, albeit with different task focuses.
**Q2: Data format**
A2: Our work primarily focuses on 3D body keypoint sequences following previous studies [59, 19, 54, 57, 40, 32], as discussed in supplementary material Lines 26-28. The 3D skeletons are also the standard product of markerless multi-view motion capture methods [64, R15, R16], which are more suitable for our data collection scenario than marker-based ones. We use $J=15$ skeleton joints with the same definition as previous works [40, 59].
---
## Response to Ethics Reviews
We thank the Ethics Reviewers for taking the time to review our paper and detailed suggestions.
1. The athletes were indeed informed that their identities would be kept private. We obtained written consent from each participant, clearly outlining the data collection process and intended uses, ensuring transparency. We will include the consent forms given to participants in the Appendix in the revisions.
2. While gait has been acknowledged as a behavioral biometric in recent studies, gait recognition primarily relies on video or sensor data. It is crucial to clarify that our released dataset does not contain video or sensor data, mitigating the risks associated with biometric identification. To the best of our knowledge, there is no viable way to recognize person identities from the data we provide without extra information.
Pdf: /pdf/0e36d3f071a396f86ee932193b070ca286725d47.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: --
Strengths: --
Weaknesses: --
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: --
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: --
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to provide your feedback. However, we noticed that the specific details of your review were not included. To better understand your viewpoint and address any potential issues, it would be immensely helpful if you could provide more detailed feedback. | null | null | null | null | null | null |
Functional-Group-Based Diffusion for Pocket-Specific Molecule Generation and Elaboration | Accept (poster) | Summary: In this paper, a functional-group-based diffusion model called D3FG is proposed to generate molecules in 3D for target protein binding. Two generation schemes including joint and two-stage generation schemes are formulated.
Strengths: 1. The paper is well-written and easy to follow.
2. It is the first functional-group-based diffusion model for structure-based drug design.
Weaknesses: 1. The performance improvement over previous methods is limited. For example, in Table 2, TargetDiff achieves the lowest JS divergence on most bond distances. In Table 4, TargetDiff achieves the best performance on the vina score. In Figure 3, D3FG does not show an advantage over other baselines with respect to atom type distribution.
2. The evaluations are not comprehensive. For example, besides divergence between bond distances, bond angles and dihedral angles should be evaluated. The influence of the size of the functional group set is not explored.
3. The technical contribution is limited. The application of diffusion models for drug design is not new. It seems the required techniques are proposed in previous works including TargetDiff [7], DiffSBDD [8], and DiffAB [52].
4. Some related works are not discussed. For example, some previous methods also leverage functional groups or motifs for molecule design: JT-VAE [1], PS-VAE [2], and FLAG [3].
[1] Jin et al., Junction tree variational autoencoder for molecular graph generation, ICML 2018
[2] Kong et al., Molecule generation by principal subgraph mining and assembling, NeurIPS 2022
[3] Zhang et al., Molecule generation for target protein binding with structural motifs, ICLR 2023
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It seems that the reported scores in Table 4 are different from the original papers of baselines. Could the authors explain that?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: Yes, the authors have adequately discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thanks for your advice; we have added more experimental details in Appendix E accordingly, as shown in CQ2 in the General Response. Here are our responses to your concerns.
Response to W1: On one hand, although the improvements in docking scores are not significant according to Table 4, the other chemical properties of the generated molecules are overall the best. On the other hand, D3FG can generate more realistic drug-like molecules with complex substructures (Table 1 in the paper). On atom type distribution, it achieves the lowest MAE. Moreover, on the other geometries such as bond length, bond angle, and dihedral distributions, it shows the overall best performance, according to Table 2 in the paper and Tables 2 and 3 in CQ2 of the General Response.
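For reference, the bond-geometry comparisons discussed here and in the review use the Jensen-Shannon divergence between empirical distributions (e.g., normalized histograms of bond lengths from generated vs. reference molecules). A minimal sketch of the metric itself, with the histogramming step left out:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions,
    e.g. normalized histograms of bond lengths or angles."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)

    def kl(a, b):
        # KL(a || b); eps guards empty histogram bins
        return float(np.sum(a * np.log((a + eps) / (b + eps))))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

It is 0 for identical distributions, symmetric, and bounded above by log 2 for disjoint supports, which makes scores comparable across bond types.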
Response to W2: We hope CQ2 in General Response can remove your doubts and change your opinion.
Response to W3: We have discussed the technical contributions of D3FG over TargetDiff and DiffSBDD in the paper, and summarized them in three points. (i) A functional-group-based method, so that more complex substructures with pharmacodynamic function can be generated. (ii) SO(3) diffusion, to generate the orientations of functional groups regarded as rigid bodies. (iii) A fragment-linker design, with two solutions whose effectiveness is validated through experiments; the two-stage scheme, which is similar to classical CADD, performs better. Our contributions over DiffAB can be summarized as: (i) Establishing a repository of functional groups with stable substructures, rather than amino acids which are completely predefined, so that positions, orientations, and types can be generated through diffusion models. (ii) Treating single atoms as linkers, whose symmetries IPA cannot handle, and using EGNN as another head to generate their positions. (iii) Employing BERT-style diffusion (D3PM) rather than uniform diffusion on categorical variables.
Response to W4: We have added these methods to the related works as shown in CQ3, and hope this helps to enhance the integrity of the article and changes your opinion. FLAG's code was only made public, with a bug fix, on May 10, 2023; before we could gain insight from the source code, detailed comparisons were hard to provide. Nevertheless, here we give comparisons with FLAG. (i) It is an auto-regressive model, violating physical rules from the perspective of energy [5], whereas diffusion models, which consider global interactions, are a solution. (ii) It needs a classifier to decide which atom will be bonded to the next motif, so training and prediction are not end-to-end. In contrast, D3FG generates molecules with a diffusion model as long as the number of nodes is given. (iii) Finally, functional groups are rigid bodies in D3FG, while the motifs in FLAG are 2D SMILES (https://github.com/zaixizhang/FLAG/blob/main/utils/vocab.txt), with structures generated by RDKit (lines 188-189 in motif_sample.py at the same link). The 3D structures of functional groups in D3FG are obtained from the training set, thus avoiding FLAG's distribution-shift problem, since the training/test motif substructures may not match RDKit's generation, as discussed in [6]. Besides, since the pre-trained model of FLAG is not provided, we cannot give a detailed experimental comparison.
Response to Q1: All the metrics are reported based on our reproduction of the methods, but there are unavoidable differences from the original papers. We think there are two main reasons:
- (i) Different platforms. Take Pocket2Mol as an example: the original test platform is 'NVIDIA V100 GPUs with Python 3.8 and PyTorch 1.9.0', while in our experiments it is an 'NVIDIA A100 (81920 MiB) GPU with Python 3.9 and PyTorch 1.12'. The CUDA version, PyTorch version, and many environment variables affect the test results. However, once the protocol is unified, we can ensure the rigor of the experiments.
- (ii) Stochasticity. All these methods are generative models, so randomness is unavoidable.
Overall, the differences are acceptable. The Vina docking score of Pocket2Mol is reported as -7.288 originally and as -7.05 in DiffSBDD; in our experiments it is -6.92, a minor and acceptable deviation.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for the authors' response! However, most concerns still exist (e.g., the limited technical contribution and obviously lower baseline performance compared to original papers). The source code is also not provided. Therefore, the reviewer leans towards rejection.
---
Reply to Comment 1.1.1:
Title: Thanks for this timely reply.
Comment: In the rebuttal, we describe our contributions over previous works in W3, and the improvements of D3FG in realistic generation over previous methods in W1 and W2. If there are any remaining doubts on these two points, please feel free to point them out and let us explain.
Besides, for further reproduction, we have updated our code in an anonymous repository ( https://anonymous.4open.science/r/D3FG-D1FC/ ), and hope it helps to change your opinion in a more positive direction.
Thanks a lot for your reply. | Summary: Pocket-specific molecule generation has received considerable attention in recent years, and the authors propose a functional-group-based diffusion model to address this task. The model considers the generation of complete molecules as the assembly of functional groups (fragments) and atoms that connect these functional groups. They employ a diffusion-based generative scheme to determine the coordinates, orientation, and types of these fragments. The experiments demonstrate the satisfactory generation performance of the model and its potential applications in molecule elaboration.
Strengths: 1. The paper is well-written, and the narrative is clear and concise.
2. The authors address many important details, including the determination of functional groups, handling chirality, and generating heterogeneous graphs.
3. Overall, while the idea of decomposing molecules is not novel, the authors provide a robust model and conduct experiments to validate its superiority.
Weaknesses: 1. In the Method section, it would be beneficial to provide an overview of the entire method before delving into the specifics of each part.
2. The aromatic elements C:C and C:N in Table 2 could be written in lowercase to maintain consistency with convention.
3. It would be beneficial to include more visualizations (2D/3D) of generated molecules.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. In the Introduction, it is mentioned that protein amino acids are represented as functional groups and linking atoms (Line 42). However, in the Method section, the amino acid space I_aa is preserved (Line 60). Why is the amino acid space still necessary if any node can be considered either a functional group or atom?
2. The functional groups are treated as rigid fragments, and their orientations are predicted. But what if the functional groups have internal rotatable bonds and cannot be regarded as rigid (e.g., O=CNO)? Additionally, how is the symmetry of functional groups considered when defining orientation? For example, a benzene ring can have multiple orientations resulting in the same arrangement of the six atoms. Which orientation is chosen during training?
3. In the experiments, why did you use EFGs to segment molecules instead of BRICS composition, as done in many previous fragment-based works? Why did you choose 25 functional groups? How does this choice cover the total number of atoms in a molecule, i.e., the ratio of atoms in functional groups to the total number of atoms for a molecule? Did you consider other sizes for the set of functional groups?
4. Does the node number include the protein in Line 220?
5. For the molecule elaboration experiment, did you retrain the model specifically for this task, or did you use the same model as in the molecule generation experiment?
6. In Line 288, what does the word "them" refer to? It seems ambiguous.
7. Why does D3FG(EHot) generate molecules with higher affinities? In my opinion, since functional groups with high binding contributions were removed, it is likely that the generated functional groups have weaker binding, resulting in new molecules with weaker affinities. On the other hand, in the D3FG(ECold) experiments, a functional group with higher binding contribution is more likely to be generated, replacing the weaker functional group and leading to higher affinities. Why do the experimental results not align with this explanation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you a lot for your appreciation of the work, and the advice with deep insights. Here is the response to your concerns.
Response to Q1. The amino acid index set is preserved because, in the diffusion process, only the linkers and functional groups are noised, so the index sets of the three node types are defined to differentiate them. Besides, the index set is also used in the description of the amino acid embedding.
Response to Q2. By using EFGs to decompose the molecules, we find that functional groups with high frequency usually have stable structures. In detail, we randomly choose the structure of one functional group split from one molecule, and match it in 3D against all other functional groups with the same SMILES from other molecules. We define a substructure as stable if the RMSD is no greater than 0.005 Å. When some RMSDs exceed the threshold, we pick all of them and repeat the process. In this way, we find the functional groups are stable in 3D, since most of them have only one stable structure, and only two have multiple structures. The orientation is defined by a global vector, namely the z-axis of the local frame. For example, in a benzene ring the six atoms are coplanar, so no matter how the local frame is built, the z-axis is the normal vector to that plane, and this normal vector serves as a representation of the rotational orientation.
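The stability check described above hinges on RMSD after optimal rigid alignment of two copies of the same fragment. A minimal numpy sketch of the standard Kabsch procedure (our own illustration, not the exact implementation; the 0.005 Å threshold would be applied to its output):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) point sets after optimal rigid alignment.
    P and Q list corresponding atom coordinates of the same fragment."""
    P = P - P.mean(axis=0)          # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                     # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])      # guard against reflections
    R = Vt.T @ D @ U.T              # optimal rotation mapping P onto Q
    P_rot = P @ R.T
    return float(np.sqrt(((P_rot - Q) ** 2).sum() / len(P)))
```

Two conformers of a fragment would count as the same stable structure when this value stays at or below the threshold.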
Response to Q3. Our D3FG method is adaptable and can be applied to various fragment databases. Although the selection of EFGs and BRICS is technically interchangeable, the EFGs method aligns better with our objectives. Unlike BRICS, which deconstructs molecules into synthesizable building blocks, the EFGs method provides a viable approach to obtaining a series of chemical fragments. Reports have shown that the number of fragments derived from BRICS increases linearly with the size of the molecular library [1]. Conversely, the fragments from EFGs are manually curated, thus providing a robust representation of molecular diversity. These fragments effectively cover drug-like chemical space and have demonstrated efficacy in distinguishing between inhibitors and non-inhibitors [2]. Therefore, we opted to use EFGs in this work; shifting to the BRICS setting is straightforward and can be done according to the user's preference. Besides, in choosing the functional groups, we aim to establish a repository within which most molecules can be decomposed. As you can see in Table 8 in the appendix, `c1ccc2[nH]ccc2c1` is the 25th functional group and only 2.3% of molecules contain it, so we are confident that 25 functional groups are enough to form a repository covering most generation tasks in CrossDocked. In detail, we test the sensitivity of performance to the repository size in CQ2 of the General Response. We hope this removes your doubts.
Response to Q4. Yes, it does; since D3FG and DiffSBDD represent protein pockets at the amino acid level, the node numbers are much lower than for TargetDiff.
Response to Q5. The model is retrained, since the contextual information changes. In molecule generation, the condition is the protein structure, and in the elaboration task, it is the protein and the remaining molecule's fragments.
Response to Q6. 'Them' refers to the 'removed functional groups with the highest scores'. We are sorry for the confusion caused by our presentation and will revise the description in the next version.
Response to Q7. The pharmacophoric sites are usually fixed in a pocket, and the functional groups with high hotmap scores usually lie at these sites. By replacing such a group with other fragments of different local structures and orientations, the likelihood of enhanced interactions with the protein is higher. However, when removing the functional groups with low scores, the newly generated functional groups usually lie at the 'cold' sites as a result of energy constraints within the molecule, so optimizing this part of the fragments often contributes very little to the binding affinity.
[1] HierS: hierarchical scaffold clustering using topological chemical graphs. Journal of medicinal chemistry
[2] Fingerprint-based in silico models for the prediction of P-glycoprotein substrates and inhibitors
---
Rebuttal Comment 1.1:
Title: Thanks for your reply.
Comment: I appreciate your informative reply. Your thorough explanation has indeed clarified my inquiries. However, it appears that the weakness of the paper mentioned in the review has not been examined or discussed yet.
---
Reply to Comment 1.1.1:
Title: Response.
Comment: Thanks for your reply, and we sincerely appreciate your recognition of our work.
We have adopted your advice and will update the D3FG in the next version.
However, since a revised paper is not allowed to be uploaded, we list below the improvements that will appear in the revised version according to your advice.
For W1. We added an overall description of D3FG in Section 4, as
D3FG first decomposes molecules into two categories of components, functional groups and linkers, and then uses a diffusion generative model to learn the type and geometry distributions of these components.
In this section, we describe D3FG in four parts: (i) the diffusion model as the generative framework, in which the three variables are generated; (ii) the denoisers parameterized by graph neural networks, satisfying certain symmetries so that the generative model is SE(3)-invariant; (iii) the sampling process in which molecules are generated by the trained models; (iv) further problems arising from the heterogeneous graph, with two solutions.
For W2. The notation is changed according to your suggestions.
For W3. More generated protein-molecule pairs are added in Appendix E, with 2D and 3D graphs of the molecules attached. | Summary: This paper proposes a so-called functional-group-based diffusion generative model, namely D3FG, to generate molecules with realistic substructures conditioned on the protein binding sites. D3FG represents the protein-ligand docking system as a fragment-based system. The molecule fragments are molecule substructures and the protein fragments are amino acids. Thus, the original protein-ligand docking system is reduced to a coarsened graph with molecule substructures and amino acids as nodes. The molecule substructures are connected with heavy atoms as linkers. Thus, the denoising and diffusion processes are defined with molecule substructure translation probabilities. Although this idea is interesting, the reviewer is concerned about the paper presentation, literature review, and marginal performance improvement.
Strengths: Representing the protein-molecule docking system as a coarsened graph reduces the computational expense of diffusion generative models. Recently, this scheme has drawn much attention, and some related works have already shown its efficiency. Moreover, molecule elaboration is also related to ligand optimization, which aims to improve ligand efficiency by modifying its structure.
Weaknesses: 1. Limited Novelty and Insufficient literature review. This paper represents the fragment-based method for ligand generation and claims no prior work employs the fragment-based generation scheme. However, there are already some methods that employ fragment-based generation. For example, Deepfrag [1], FLAG [2], and SQUID [3] are all fragment-based generative methods for target-aware molecule design. It is necessary to discuss the difference and highlight the contributions.
2. The paper presentation needs to be improved. For example, what is the intuition of the definition of absorbing state in Line 105? What is the connection between the BERT-style objective and the denoising objective in Eq.4? What are the different levels of nodes in Section 4?
3. Definition of "functional group". The functional group often refers to the molecule substructures that contribute most to the chemical properties. However, as shown in Section 5.1, the functional group in this paper refers to the most frequent substructures. Mining the most frequent substructures for molecule generation is not new and has been employed in [2,4].
4. Marginal empirical improvement. Moreover, there is only marginal empirical improvement over the baseline methods as shown in Table 4.
5. Unfair comparison. Since the proposed method already employs the most frequent substructures for model training, it is unfair to choose the "'Ratio’ of the top ten functional groups with the highest frequency" as an evaluation metric in Table 1.
[1]. Deepfrag: a deep convolutional neural network for fragment-based lead optimization. Chemical Science.
[2]. Molecule generation for target protein binding with structural motif. ICLR 2023.
[3]. Equivariant shape-conditioned generation of 3d molecules for ligand-based drug design. ICLR 2023.
[4]. Molecule Generation by Principal Subgraph Mining and Assembling. NeurIPS 2022.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: The Questions are listed in the Weaknesses Section above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank you a lot for your constructive advice and answer your questions one by one, as below.
Response to Q1:
We add the three methods to the related work as discussed in CQ3 in General Response. In detail, the novelty of D3FG and its differences are listed below:
- We did not compare with FLAG [2] in the method and experiments because its code was only made public, with a bug fix, on May 10, 2023; before we could gain insight from the source code, detailed comparisons were hard to provide. Here we give comparisons with FLAG. (i) It is an auto-regressive model, violating physical rules from the perspective of energy [5], whereas diffusion models, which consider global interactions, are a solution. (ii) It needs a classifier to decide which atom will be bonded to the next motif, so training and prediction are not end-to-end. In contrast, D3FG generates molecules with a diffusion model as long as the number of nodes is given. (iii) Finally, functional groups are rigid bodies in D3FG, while the motifs in FLAG are 2D SMILES (https://github.com/zaixizhang/FLAG/blob/main/utils/vocab.txt), with structures generated by RDKit (lines 188-189 in motif_sample.py at the same link). The 3D structures of functional groups in D3FG are obtained from the training set, thus avoiding FLAG's distribution-shift problem, since the training/test motif substructures may not match RDKit's generation, as discussed in [6]. Besides, since the pre-trained model of FLAG is not provided, we cannot give a detailed experimental comparison.
- DeepFrag [1] is a model for fragment-based lead optimization, in which the protein and the parent are used as conditions and the fragment type is predicted by the model, so it is closer to the elaboration task defined in D3FG. Here are several advantages of D3FG(EHot/Cold) over DeepFrag. (i) DeepFrag only predicts the fragment type, without 3D structures, so problems of symmetry do not arise, whereas D3FG generates the 3D positions of the molecules. (ii) Although the type is SE(3)-invariant, DeepFrag uses CNNs as its backbone rather than an EGNN, so the invariance is not preserved; D3FG instead ensures physical symmetries by using an EGNN. (iii) D3FG is a generative model with stochasticity, while DeepFrag, as a discriminative model, only gives the probability of the fragment types.
- SQUID [3] performs shape-conditioned molecule generation, in which the shape is defined as a point cloud outlining the molecule's 3D structure, while our task is pocket-specific molecule generation, where the shape of the pocket may not be similar to the outline of the molecules. In the experiments, we could not find a task in SQUID similar to SBDD. Therefore, the only idea D3FG and SQUID share is being `fragment-based'. As for novelty, SQUID is still an auto-regressive model, similar to Pocket2Mol, with focal-atom prediction modules, which differs from our end-to-end diffusion model. In addition, we use SO(3) diffusion to generate the orientations of fragments.
Response to Q2:
The absorbing state and the BERT-style training objective are described in reference [8] (D3PM), which is now cited in the new version; we are sorry for missing the reference. The nodes diffused into an absorbing state can be regarded as masked linkers/functional groups, so optimizing the cross-entropy defined in Eq. (4) is like pre-training BERT. As discussed in Sec. 4.4, functional groups are regarded as rigid bodies that occupy a certain volume of space, while linkers are mass points, so the nodes are at different levels: nodes of functional groups are higher-level nodes consisting of atoms as lower-level nodes.
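As a loose illustration of this analogy (our sketch, not the authors' implementation; all names and numbers are made up), a BERT-style objective computes the cross-entropy only over nodes that were diffused into the absorbing ("masked") state:

```python
import math

def masked_cross_entropy(logits, targets, masked):
    # Average cross-entropy over absorbing-state ("masked") nodes only,
    # analogous to the BERT pre-training objective: unmasked nodes
    # contribute nothing to the loss.
    total = 0.0
    for i in masked:
        row = logits[i]
        m = max(row)
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        total += log_z - row[targets[i]]
    return total / len(masked)

# 3 nodes, 2 type classes; only nodes 0 and 2 are in the absorbing state.
loss = masked_cross_entropy(
    logits=[[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]],
    targets=[0, 1, 0],
    masked=[0, 2],
)
```

Denoising a masked node then amounts to predicting its original type, exactly as BERT predicts a masked token.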
Response to Q3:
We use EFGs [7] to decompose the molecules, in which the substructures are defined as 'a group of atoms that has similar chemical properties whenever it occurs in different compounds', rather than generally defined motifs whose substructures vary with the dataset. We did not aim to 'MINE' the frequent substructures. Instead, we try to establish a repository of functional groups into which most molecules can be decomposed, and use the repository to generate or elaborate new molecules. We consider the manual establishment of such a repository, with both 2D and stable 3D structures, to be one of our contributions.
Response to Q4:
We admit that the improvements in binding scores over TargetDiff are not significant, which is one of D3FG's limitations, but the other chemical properties show advantages. Besides, we argue that D3FG generates more realistic molecules with complex substructures, as shown in Table 1 and Tables 2 and 3 in the General Response.
Response to Q5:
First, generating complex functional groups as substructures in molecules is our motivation (Lines 32-33), so it is natural to compare the complex substructures generated by different methods to show that D3FG fixes the problem. Second, even if complex substructures are generated, they may not be assembled into a valid molecule, because they occupy a large space and lead to linker-fragment intersections. Therefore, the problem is not trivial, and D3FG(Stage) is a good solution, as it regards the graph as heterogeneous and uses a two-stage generation scheme.
[1] Deepfrag: a deep convolutional neural network for fragment-based lead optimization. Chemical Science.
[2] Molecule generation for target protein binding with structural motif. ICLR 2023.
[3] Equivariant shape-conditioned generation of 3d molecules for ligand-based drug design. ICLR 2023.
[4] Molecule Generation by Principal Subgraph Mining and Assembling. NeurIPS 2022.
[5] Diffbp: Generative diffusion of 3d molecules for target protein binding, 2022
[6] Torsional Diffusion for Molecular Conformer Generation, 2022
[7] Extended Functional Groups (EFG): An Efficient Set for Chemical Characterization and Structure-Activity Relationship Studies of Chemical Compounds. 2015
[8] Structured denoising diffusion models in discrete state-spaces, 2023
---
Rebuttal Comment 1.1:
Title: Reply.
Comment: Thank you for your valuable time and constructive suggestions.
Considering that the author-reviewer discussion phase is nearing the end, we would like to be able to confirm whether our responses have addressed your concerns.
If there is anything else that is not clear, please feel free to contact us.
Best,
Authors | Summary: This paper proposes to generate 3D molecules from functional groups and linker atoms with a diffusion model. Using functional groups as building blocks helps the model generate realistic local structures. In the proposed diffusion model, the atom/functional group types, coordinates, and orientations are predicted. In addition, the authors experimented with two strategies, joint and two-stage, to generate the heterogeneous graph, which contains functional groups and linker atoms. The performance indicates that the proposed method achieves SOTA performance on the structure-based drug design task.
This paper also proposes a new molecular elaboration task and curates a dataset for it.
Strengths: 1. This paper implements functional groups in a diffusion-based structure-based drug design model for the first time.
2. The performance indicates using functional groups as building blocks can improve QED and SA, which have been a challenge for previous diffusion models.
3. This paper proposes a new task, molecule elaboration, for structure-based drug design and provides a dataset for it.
Weaknesses: The paper proposes a new molecule elaboration task, and some existing baselines (e.g., Pocket2Mol [1], 3DSBDD [2], TargetDiff [3], and DiffSBDD [4]) can be easily adapted for this task. Currently, there is no performance comparison for the molecule elaboration task, and the authors are recommended to evaluate their model against existing baselines that can be applied to it.
[1] Peng, Xingang, et al. "Pocket2mol: Efficient molecular sampling based on 3d protein pockets." ICML 2022.
[2]Luo, Shitong, et al. "A 3D generative model for structure-based drug design." NeurIPS 2021.
[3] Guan, Jiaqi, et al. "3d equivariant diffusion for target-aware molecule generation and affinity prediction." ICLR 2023.
[4]Schneuing, Arne, et al. "Structure-based drug design with equivariant diffusion models." arXiv preprint arXiv:2210.13695 (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What percentage of the generated molecules are complete (i.e., all generated atoms are connected into one molecule)?
2. In the molecular elaboration task, are the remaining fragments included in D3FG as a condition for elaboration? If not, can the proposed model guarantee that the newly generated fragment is suitable to be connected with the remaining fragments?
3. Why does the two-stage approach consistently outperform the joint training one? Is there any explanation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We sincerely thank you for your appreciation of the work and for your insightful advice. Here are our responses to your concerns.
Response to comparison to baselines for molecule elaboration:
As models for the 3D molecule elaboration task are scarce, we use STRIFE [1] (an auto-regressive VAE) as a reasonable alternative, since STRIFE also extracts pharmacophoric information with hotspot maps. However, the model only elaborates molecules in 2D, so we first use the re-trained models to generate 2D molecules conditioned on the protein's structural information and the large fragments without the hottest/coldest functional groups, and then use MMFF in RDKit to obtain the conformation so that the docking score can be calculated. The metrics are presented below:
| | Vina Score | Vina Delta Aff | Gnina Score | Gnina Delta Aff | QED | SA | LogP | Lipinski |
|-------------|------------|----------------|-------------|-----------------|-------|-------|-------|----------|
| STRIFE(Hot) | -7.01 | 42.86% | 5.13 | 33.35% | 0.483 | 0.722 | 0.811 | 4.220 |
| STRIFE(Cold) | -6.96 | 40.20% | 5.14 | 34.49% | 0.479 | 0.724 | 0.809 | 4.225 |
| D3FG(EHot) | -7.19 | 51.78% | 5.51 | 56.53% | 0.482 | 0.731 | 0.814 | 4.330 |
| D3FG(ECold) | -7.02 | 44.03% | 5.16 | 32.69% | 0.476 | 0.707 | 0.820 | 4.228 |
It shows that for the four chemical properties the differences are very small, while in the docking-score evaluation the differences are significant. One possible reason is that STRIFE elaborates the molecule auto-regressively at the atom level, so the generated fragments in the binding sites are usually small and cannot form complete functional groups with pharmacodynamic function, due to the problems of early stopping and unrealistic substructures in auto-regressive methods [2]. Thus, since D3FG generates larger fragments in the binding sites, interactions are more likely to occur [2, 3].
Response to Q1.
Statistically, in D3FG(Stage), 87.16% of molecules are generated with all bonds connected, so the strict validity is 87.16%. In practice, when there are disconnected bonds, we select the largest fragment as the molecule, as DiffSBDD does, and preserve it if its size is larger than 80% of the generated one. In this way, the non-strict validity is 96.64%. However, for D3FG(Joint), the two metrics are only 71.42% and 92.37%. We add these metrics to the Appendix.
Response to Q2.
In the elaboration task, both the pocket and the remaining fragments are used as contextual information. The model is re-trained on this task, and it generates only one functional group, with its type, position, and orientation.
Response to Q3.
We think that in the binding system, functional groups are considered at the fragment level and linkers at the atom level; Section 4.4 gives a brief description of these two levels of nodes. Specifically, in the two-stage scheme, the first diffusion model locates the pharmacophoric sites and fills them with appropriate functional groups; then the second diffusion model, with another GNN as its denoiser, uses linkers to connect them, conditioned on the pocket and the functional groups' structures. Since functional groups are rigid bodies that occupy a certain space, the second model learns that the linkers should not intersect with the generated fragments, so the validity is better and the generated molecules are more realistic than those generated by the joint scheme, in which linkers and functional groups are all regarded as being at the same level and denoised by a single GNN.
[1] Incorporating Target-Specific Pharmacophoric Information into Deep Generative Models for Fragment Elaboration. J Chem Inf Model. 2022 May
[2] Diffbp: Generative diffusion of 3d molecules for target protein binding, 2022
[3] 3d equivariant diffusion for target-aware molecule generation and affinity prediction. In International Conference on Learning Representations, 2023
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for the reply and all my questions have been clarified.
The authors provide both strict and non-strict validities for D3FG. Please also include the validities of the other baselines when updating the Appendix. The settings and details of the molecular elaboration task are clearer after the authors' explanation; it would be helpful to include these in the final manuscript as well.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply.
Comment: We sincerely thank you for appreciating our efforts in responding. The details with more experimental results and comparisons will be included in the final manuscript. | Rebuttal 1:
Rebuttal: **General Response**:
Here we conclude several common concerns of the reviewers, and respond to them as below:
CQ1: Sensitivity Analysis (How does the size of the functional group repository affect generative performance?)
We conduct experiments on the effects of repository size on performance. Here, we choose the top-5, top-10, ..., top-20, and top-25 functional groups in Table 8 in the paper as smaller repositories, and report Vina docking scores, Vina delta affinity, and the other four chemical properties in these settings. In the implementation, we add $-1e8$ to the logits corresponding to the removed 20, 15, 10, ... functional group types, to force the model to generate only the remaining ones. Table 1 (or Figure 5 and Table 11) in the newly updated supplementary material gives details, which will be added in the new version.
Table 1. Effects of number of functional groups on performance
| repository size | 5 | 10 | 15 | 20 | 25 |
|-----------------|--------|--------|--------|--------|--------|
| Vina score | -6.59 | -6.7 | -6.99 | -7.06 | -7.04 |
| Vina delta affinity | 32.14% | 39.39% | 45.42% | 47.33% | 46.58% |
| qed | 0.489 | 0.484 | 0.502 | 0.496 | 0.501 |
| sa | 0.821 | 0.814 | 0.836 | 0.843 | 0.84 |
| logp | 2.774 | 2.795 | 2.802 | 2.759 | 2.821 |
| Lipinski | 4.91 | 4.983 | 4.937 | 4.931 | 4.965 |
It shows that for the docking score, when only a few of the least common functional groups are removed, the performance deterioration is minor, but it becomes significant when only 5 or 10 functional groups remain. One explanation is that the least common 5 to 15 functional groups appear very rarely in the training set. For example, 'O = P(O)O', although the 15th most common functional group, occurs in only 4167/100000 = 4.167% of molecules, so its exclusion does not have a particularly large impact on overall performance. In contrast, when 'NS(=O)=O', which has a frequency of up to 10%, is excluded, its generation can only rely on the diffusion of linker atoms, which drastically reduces its frequency of occurrence and affects overall performance. Besides, for QED, SA, and the other scores, the differences are not significant.
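As a sketch of the masking trick described above (illustrative only; the logit values and type indices are our assumptions, not the paper's), adding $-1e8$ to the logits of excluded functional-group types drives their softmax probabilities to numerically zero:

```python
import math

NEG_INF = -1e8  # the large negative constant added to excluded logits

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mask_logits(logits, allowed):
    # Add -1e8 to the logits of removed functional-group types so that
    # the softmax assigns them numerically zero probability.
    return [x if i in allowed else x + NEG_INF for i, x in enumerate(logits)]

# 5 functional-group types; restrict generation to types 0 and 3.
logits = [1.2, 0.3, -0.5, 2.0, 0.1]
probs = softmax(mask_logits(logits, allowed={0, 3}))
```

The remaining probability mass is redistributed over the allowed types only, so sampling from `probs` can never produce an excluded type.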
CQ2: Detailed analysis of geometries like bond angle and dihedral.
We add experimental results on the Jensen-Shannon divergence (JSD) of the bond-angle and dihedral distributions of the molecules generated by different methods vs. the references, as below. Tables 2 and 3 demonstrate that D3FG generates more realistic drug molecules in comparison with the other baselines.
Table 2. JSD of bond angle distributions.
| Angle | Pocket2Mol | TargetDiff | DiffSBDD | D3FG(Stage) | D3FG(Joint) |
|---------|------------|------------|----------|-------------|-------------|
| C-C-C | 0.269 | 0.272 | 0.304 | 0.255 | 0.253 |
| C-C-N | 0.254 | 0.267 | 0.313 | 0.256 | 0.255 |
| C-N-C | 0.286 | 0.241 | 0.319 | 0.269 | 0.277 |
| C-C-O | 0.317 | 0.295 | 0.345 | 0.293 | 0.295 |
| C-O-C | 0.308 | 0.311 | 0.372 | 0.310 | 0.304 |
| C-N-N | 0.294 | 0.276 | 0.301 | 0.270 | 0.281 |
| N-C-O | 0.300 | 0.295 | 0.326 | 0.282 | 0.291 |
| N-C-N | 0.304 | 0.288 | 0.342 | 0.282 | 0.292 |
Table 3. JSD of dihedral distributions.
| Dihedral | Pocket2Mol | TargetDiff | DiffSBDD | D3FG(Stage) | D3FG(Joint) |
|----------|------------|------------|----------|-------------|-------------|
| C-C-C-C | 0.151 | 0.149 | 0.158 | 0.141 | 0.138 |
| C-C-C-N | 0.176 | 0.165 | 0.224 | 0.169 | 0.175 |
| C-C-C-O | 0.183 | 0.159 | 0.206 | 0.156 | 0.164 |
| C-C-O-C | 0.180 | 0.174 | 0.231 | 0.167 | 0.149 |
| C-C-N-C | 0.165 | 0.142 | 0.223 | 0.136 | 0.146 |
| C-C-N-O | 0.277 | 0.270 | 0.285 | 0.264 | 0.293 |
| C-N-C-O | 0.453 | 0.430 | 0.398 | 0.358 | 0.335 |
| N-C-C-O | 0.315 | 0.253 | 0.303 | 0.244 | 0.272 |
| C-N-C-N | 0.340 | 0.317 | 0.328 | 0.254 | 0.263 |
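For reference, the JSD between two discretized angle distributions can be computed as follows (a generic sketch; the binning, log base, and toy sample values here are our assumptions, not the paper's evaluation protocol):

```python
import math

def jsd(p, q):
    # Jensen-Shannon divergence (base-2 logs, so values lie in [0, 1])
    # between two discrete probability distributions.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def histogram(samples, bins, lo, hi):
    # Discretize samples (e.g., bond angles in degrees) into a
    # normalized histogram over [lo, hi).
    counts = [0] * bins
    width = (hi - lo) / bins
    for s in samples:
        counts[min(int((s - lo) / width), bins - 1)] += 1
    total = sum(counts)
    return [c / total for c in counts]

# Toy C-C-C bond-angle samples: reference vs. generated molecules.
ref = histogram([109.5, 110.2, 111.0, 108.9, 112.3], bins=18, lo=0.0, hi=180.0)
gen = histogram([109.8, 115.4, 108.2, 120.1, 111.7], bins=18, lo=0.0, hi=180.0)
score = jsd(ref, gen)
```

Lower scores mean the generated angle distribution is closer to the reference, which is how the table entries above are read.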
CQ3: More related works on fragment-based molecule generation.
We add a paragraph to the related work in the new version, as below.
**Fragment-based drug design**. Several prior works have studied fragment-based molecule generation. For example, JT-VAE [44] generates a tree-structured scaffold over chemical substructures and combines them into a 2D molecule. PS-VAE [45] automatically discovers frequent principal subgraphs from the dataset and assembles the generated subgraphs into the final output molecule in 2D. Further, DeepFrag [46] predicts fragments conditioned on parents and pockets, SQUID [47] generates molecules at the fragment level conditioned on the molecule's shape, and FLAG [48] auto-regressively generates fragments as motifs based on protein structures in 3D.
[44] Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation, 2019
[45]Xiangzhe Kong, Wenbing Huang, Zhixing Tan, and Yang Liu. Molecule generation by principal subgraph mining and assembling, 2022.
[46]Harrison Green, David R. Koesb, and Jacob D. Durrant. Deepfrag: a deep convolutional neural network for fragment-based lead optimization. Chemical Science, 2021
[47] Keir Adams and Connor W. Coley. Equivariant shape-conditioned generation of 3d molecules for ligand-based drug design. In The Eleventh International Conference on Learning Representations, 2023.
[48]ZAIXI ZHANG, Shuxin Zheng, Yaosen Min, and Qi Liu. Molecule generation for target protein binding with structural motifs. In International Conference on Learning Representations, 2023
Pdf: /pdf/ec794f7b311aa53eb7f4ba51ae9e384b0275670b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents a novel method for generating 3D molecules that bind to specific protein pockets based on a functional-group-based diffusion model (D3FG). The model decomposes molecules into functional groups and linkers and generates their types, positions, and orientations gradually through a denoising process. The model uses equivariant graph neural networks to parameterize the denoisers and ensure the roto-translational invariance of the molecule distribution. The paper also introduces a new task of molecule elaboration, which aims to modify existing molecules based on the fragment hotspot maps of the protein pockets. The paper evaluates the model on the CrossDocked2020 dataset and shows that it can generate molecules with realistic structures, competitive binding affinity, and good drug properties. The paper also demonstrates that the model can perform molecule elaboration and generate molecules with higher affinity than the reference molecules. The paper claims that D3FG is a novel and effective method for structure-based drug design that leverages the pharmacological information of functional groups.
Strengths: The paper proposes a functional-group-based diffusion model for pocket-specific molecule generation and elaboration called D3FG. The method decomposes molecules into two categories of components: functional groups defined as rigid bodies and linkers as mass points. The two kinds of components can form complicated fragments that enhance ligand-protein interactions. In the experiments, the authors claim the method can generate molecules with more realistic 3D structures, competitive affinities toward the protein targets, and better drug properties. The paper is original in its approach to generating molecules given the pockets’ structures of target proteins and its use of functional groups as basic components instead of atoms. The paper is clear in its description of the method and its results.
The paper is well-written and clearly describes the proposed method, its implementation details, and its evaluation results. The authors provide sufficient background information on related work and explain how their method differs from prior methods. The authors also provide detailed explanations of the model's components, such as the functional group decomposition, equivariant graph neural networks, and molecule elaboration. The authors use appropriate visualizations to illustrate their method's outputs and compare them with reference molecules.
The paper's originality lies in its novel approach to generating 3D molecules that can bind to specific protein pockets by leveraging functional groups' pharmacological information. The authors show their method can generate molecules with realistic structures, competitive binding affinity, and good drug properties. The paper's significance lies in its potential to improve structure-based drug design by enabling the generation of novel molecules that can bind to specific protein pockets with high affinity.
Weaknesses: The paper's main weakness is that it does not provide a detailed comparison with prior methods for generating 3D molecules that can bind to specific protein pockets. The authors briefly mention some related work, but they do not provide a comprehensive comparison of their method with prior methods in terms of performance, efficiency, and scalability. The authors also do not provide a detailed analysis of the limitations of their method, such as the types of molecules it may not be able to generate or the types of protein pockets that it may not be able to bind to.
Another weakness of the paper is that it does not provide a detailed analysis of the interpretability and explainability of its method. The authors briefly mention some visualizations and explanations of their method's outputs. Still, they do not provide a systematic analysis of how their method's components contribute to its performance and how they can be interpreted in terms of pharmacological properties.
To improve the paper, the authors could perform a more comprehensive comparison with prior methods for generating 3D molecules that can bind to specific protein pockets, including both quantitative and qualitative analyses. The authors could also perform a more detailed analysis of the limitations of their method and how they can be addressed in future work. Finally, the authors could perform a more systematic analysis of the interpretability and explainability of their method, including sensitivity analyses, feature importance analyses, and pharmacophore analyses.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - How did the authors select the 25 functional groups used for their method? How did they ensure these functional groups cover a diverse and representative range of pharmacological properties and structures? How sensitive is their method to the choice of functional groups?
- How did the authors evaluate the quality and diversity of the generated molecules? Did they use any metrics or criteria to measure the novelty, validity, and diversity of the generated molecules? How did they compare their method with prior methods regarding these metrics or criteria?
- How did the authors handle the cases where the generated molecules violate chemical or physical constraints, such as bond angles, bond lengths, steric clashes, or chirality? How did they ensure that the generated molecules were chemically feasible and stable?
- How did the authors handle the cases where the target protein pockets have multiple binding sites or modes? How did they ensure that their method can generate molecules that can bind to different sites or modes of the same protein pocket?
- How scalable is their method to larger, more complex protein pockets and molecules? What are the computational and memory requirements of their method? How does their method compare with prior methods regarding efficiency and scalability?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have not adequately addressed the limitations and potential negative societal impact of their work. The authors only briefly mention some limitations of their method in the conclusion section, but they do not provide a detailed discussion of how these limitations affect their results and how they can be overcome in future work. The authors also do not discuss any potential negative societal impact of their work, such as the ethical, legal, or environmental implications of generating novel molecules that can bind to specific protein pockets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank you for your constructive advice and answer your questions one by one below.
Response to Q1:
We use EFGs [1] to decompose the molecules and analyze the stability of the substructures, and thereby manually established a repository of functional groups into which most molecules can be decomposed. As shown in Table 8 in the Appendix, 'c1ccc2[nH]ccc2c1' is the 25th functional group, and only 2.3% of molecules contain this substructure, so we believe the 25 functional groups are enough to form a repository and cover most generation tasks on CrossDocked. In detail, we test the sensitivity of performance to the repository size in CQ1 of the General Response. We hope this addresses your concerns.
Response to Q2:
In D3FG, we did not report these metrics except 'Validity', since we follow the experimental protocol of [2, 3], which differs from that of molecule generation tasks [4]. Here we conduct experiments and give these metrics for DiffSBDD, TargetDiff, and D3FG.
'Validity' is calculated as the ratio of generated 3D molecules that are chemically valid; 'Novelty' is defined as in [5], where $C$ is the training set; 'Diversity' is the average pairwise Tanimoto distance. Table 1 gives details.
Table 1. Molecule metrics of the three diffusion methods.
| | Validity | Novelty | Diversity |
|------------|----------|---------|------------|
| DiffSBDD | 95.61% | 99.98% | 0.704 |
| TargetDiff | 95.73% | 97.42% | 0.718 |
| D3FG | 96.64% | 96.81% | 0.684 |
The differences are minor, and all three models perform well on these metrics. The novelty and diversity of D3FG are relatively low because the substructures are fixed and shared with the training and test data, so in the Tanimoto distance calculation some fingerprints are likely to be identical.
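For concreteness, the diversity metric described above can be sketched as follows (a toy illustration using bit-set fingerprints; real fingerprints would come from a cheminformatics toolkit such as RDKit):

```python
from itertools import combinations

def tanimoto(a, b):
    # Tanimoto similarity between two fingerprints given as sets of "on" bits.
    union = len(a | b)
    return len(a & b) / union if union else 1.0

def diversity(fingerprints):
    # Average pairwise Tanimoto distance (1 - similarity) over all
    # molecule pairs, i.e., the diversity metric used in Table 1.
    pairs = list(combinations(fingerprints, 2))
    return sum(1.0 - tanimoto(a, b) for a, b in pairs) / len(pairs)

# Three toy molecules with made-up fingerprint bits.
fps = [{1, 4, 9, 16}, {1, 4, 25}, {4, 9, 36, 49}]
d = diversity(fps)  # (0.6 + 2/3 + 5/6) / 3 = 0.7
```

When generated molecules reuse the same fixed substructures, many fingerprint bits coincide, which pushes this average distance down, matching the explanation above.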
Response to Q3: We follow DiffSBDD's post-processing, using OpenBabel to connect the atoms into molecules and selecting the largest fragment when there are disconnected atoms. In this way, two metrics are relevant: connectivity and validity. The validity has been reported, and the connectivity is 87.16% for D3FG vs. 79.52% for DiffSBDD as reported in [6].
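A minimal sketch of this post-processing step (our illustration, not DiffSBDD's actual code): find the connected components of the bond graph, keep the largest, and apply the 80% size threshold mentioned earlier in the rebuttal:

```python
from collections import deque

def largest_fragment(num_atoms, bonds):
    # Return the atom indices of the largest connected component of the
    # molecular bond graph (breadth-first search over an adjacency list).
    adj = {i: [] for i in range(num_atoms)}
    for a, b in bonds:
        adj[a].append(b)
        adj[b].append(a)
    seen, best = set(), []
    for start in range(num_atoms):
        if start in seen:
            continue
        seen.add(start)
        comp, queue = [], deque([start])
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        if len(comp) > len(best):
            best = comp
    return sorted(best)

# Two fragments: atoms 0-3 form one component, atoms 4-5 another.
frag = largest_fragment(6, [(0, 1), (1, 2), (2, 3), (4, 5)])
keep = len(frag) >= 0.8 * 6  # preserve only if >= 80% of the generated size
```

In this toy case the largest fragment has 4 of 6 atoms, below the 80% threshold, so the sample would be discarded under the non-strict criterion.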
Response to Q4: CrossDocked is a paired pocket-ligand dataset, so previous methods (3DSBDD, Pocket2Mol, DiffSBDD, TargetDiff) all focus on the 'one-to-one' task, which cannot be generalized to 'one-to-many' or 'many-to-one' tasks. Molecules are generated by D3FG conditioned on a given pocket structure, so for a 'many-to-one' task, D3FG would use the contextual information of many pockets as inputs and generate the corresponding number of molecules. We cannot ensure that a generated molecule binds to a set of pockets whose structures and other information are unknown.
Response to Q5:
We report the average generation time and memory usage of the different diffusion methods on the test set per 100 samples, and the training time and memory per epoch. The batch size is set to 16 in training.
| | Time(Gen) | Memory(Gen) | Time(Train) | Memory(Train) |
|------------|-----------|-------------|-------------|---------------|
| DiffSBDD | 5'48" | 4396MB | 7'36" | 31154MB |
| TargetDiff | 15'52" | 8944MB | 34'43" | 44138MB |
| D3FG | 4'44" | 4558MB | 15'15" | 31432MB |
It shows that TargetDiff is the most computationally expensive, because the nodes in its GNNs are atoms (412.14 on average), while DiffSBDD and D3FG show no significant differences, since their GNN nodes are amino acids in the protein plus atoms/functional groups (68.10 nodes on average in DiffSBDD and 53.62 in D3FG). Regarding scalability, D3FG and DiffSBDD can handle most protein pockets, since the number of amino acids in a single pocket hardly exceeds 100.
[1] Extended Functional Groups (EFG): An Efficient Set for Chemical Characterization and Structure-Activity Relationship Studies of Chemical Compounds. 2015
[2] Pocket2Mol: Efficient Molecular Sampling Based on 3D Protein Pockets. 2022
[3] 3D Equivariant Diffusion for Target-Aware Molecule Generation and Affinity Prediction. 2022
[4] E(n) Equivariant Normalizing Flows. 2021
[5] Graphvae: Towards generation of small graphs using variational autoencoders. 2018
[6] Structure-based Drug Design with Equivariant Diffusion Models. 2022
---
Rebuttal Comment 1.1:
Title: Response.
Comment: Thank you for your valuable time and constructive suggestions.
Considering that the author-reviewer discussion phase is nearing the end, we would like to be able to confirm whether our responses have addressed your concerns.
If there is anything else that is not clear, please feel free to contact us.
Best,
Authors | null | null | null | null | null | null |
Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models | Accept (poster) | Summary: This paper proposes a training-free algorithm that modifies cross-attention at inference time to implicitly minimize an energy in the latent space. The paper formulates this idea from an energy perspective. The authors conduct three experiments - multi-concept generation, image inpainting, and compositional editing - to demonstrate the effectiveness of the new idea.
Strengths: 1. The idea from the energy perspective is innovative, and the derivation of the equations is solid.
2. Figure 2 and its associated caption are straightforward and show the relation between the energy value and multi-concept generation quality.
Weaknesses: 1. In the experiment section, the authors provide only some generation cases for subjective quality analysis, without any quantitative results. The shown cases, possibly cherry-picked, may not be adequate to demonstrate the effectiveness of the proposed method.
2. A minor typo in Line 83: "update rule for a state pattern \epsilon" -> "update rule for a state pattern \zeta"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. You mention "nested hierarchy of energy functions" many times. What is its definition, and are there any reference papers?
2. In Eq. (6), what are the definitions of softmax_1 and softmax_2? And how is Eq. (6) obtained from Q=softmax_2(\betaQK^T)K? I am a little confused.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: Lack of quantitative results**. We would like to kindly remind the reviewer that quantitative comparison results are reported in Appendix D. We will move these results to the main paper in the revised version to emphasize the effectiveness of the proposed method. Also, per requests from other reviewers, we additionally evaluate the quality of the generated images using LPIPS, which again shows that the proposed method outperforms the baseline methods. Please refer to general comment 2.
**W2: typo**. We will fix it in the revised paper.
**Q1: nested hierarchy of energy functions**. We have used this term to emphasize that there are many energy functions for each layer, time-step, etc. We do not optimize just a single energy but optimize a nested hierarchy of energy functions during the forward pass of updated context vectors. Specifically, we mentioned “nested hierarchy of energy functions” in order to indicate a unified model-specific energy for the entire UNet. We gently note that it is non-trivial to derive an analytical form of the unified energy for a whole U-Net network, mainly due to the inherent complexities arising from non-linearities.
**Q2: definition of softmax_1 and softmax_2, derivation of Eq. (6)**. The definition for subscripts below the softmax is described in lines 64-68, section 2.2.
Eq. (6) can be derived from Eq. (4). Eq. (6) is chosen to draw a connection between transformer attention and Hopfield energy minimization. For such a connection, the value matrix $V$ is introduced with a mapping $W_V$. For more details, please refer to Eq. (10) in [1].
**References**
[1] Ramsauer, Hubert, et al. "Hopfield networks is all you need." arXiv preprint arXiv:2008.02217 (2020).
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal
Comment: Dear authors,
Thanks very much for your comprehensive rebuttal and discussion for all reviewers. I have carefully read all the discussion and gone through the paper once more. I understand it better, and I think it will be helpful for the image generation community.
However, I didn't derive all the equations by myself and read all the reference papers, so I can only give a low confidence (2) and keep my original score (5). Please revise the organization or presentation to make the paper more understandable, as other reviewers suggested.
---
Reply to Comment 1.1.1:
Title: Thanks for the positive response from the reviewer
Comment: We sincerely appreciate the positive feedback and support for our work. It is particularly heartening to hear that our work will be helpful for the image generation community. Moreover, we are pleased that our rebuttal and discussions help the reviewer's understanding. We are certain your insights, feedback, and suggestions have improved our work. Rest assured, we will revise the main paper to incorporate sufficient materials and discussions from this rebuttal period, while also enhancing its comprehensibility. Thank you for the positive feedback and constructive discussion again. | Summary: This paper proposes a novel **energy-based** framework that can automatically **update the context** used in cross-attention **without additional training**. They claim the proposed updating process well solves the **semantic misalignment** issue in text-to-image diffusion models.
Strengths: 1. The paper exhibits a clear, engaging, and concise writing style.
2. The paper introduces an interesting energy definition and proves the subtle connections with the attention mechanism used in transformers.
3. The experimental results look promising.
Weaknesses: 1. The definition of the energy function does not appear reasonable. For instance, consider the definition of E(K) in equation 8. E(K) increases as the L2-Norm of k_i, where i = 1, 2, ..., N, increases. The minimum value of E(K) is achieved when all k_i converge to zero. This suggests that the author assumes k_i = 0 has the highest probability (since energy is minimum), which contradicts reality. Additionally, the definition of E(Q;K) in equation 7 also seems unconventional. It resembles E(K;Q) rather than E(Q;K). I have significant doubts regarding the effectiveness of the proposed energy definition. I kindly request the authors to provide a more explicit explanation for why they have defined the energy function in such a manner and clarify the actual significance of these proposed energy functions.
2. There is a lack of quantitative experimental results. While the authors present some promising outcomes, there is a scarcity of extensive quantitative experiments to demonstrate that the proposed framework statistically outperforms the naive cross-attention mechanism. It is imperative to include detailed information about the quantitative experiments conducted to validate the efficacy of the proposed framework.
3. There is no explicit explanation of why energy minimization can help solve semantic misalignment. I can not draw direct connections between them in the paper.
I tend to reject this paper.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. The definition of the energy function does not appear reasonable.
2. Lack of quantitative experimental results.
3. Lack of explanations of why energy minimization helps solve semantic misalignment.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: With respect to your concerns, it appears that there are several misunderstandings. To clarify them, we would like to give detailed point-by-point answers below.
**W1 and Q1: The definition of the energy function does not appear reasonable**. Thanks for the important comment. We would like to gently note that the design of the proposed posterior energy $E_{posterior}(K; Q) = E_{likelihood}(Q;K) + E_{prior}(K)$ is based on the theoretical foundations of [1] and the empirical analyses in figures 9 and 10 (appendix).
**Hopfield energy**. Ramsauer et al. [1] propose a modern Hopfield energy $E_{hopfield}(Q; K) = \frac{1}{2} \text{diag}(Q^TQ) - \sum \text{logsumexp}(Kq_i^T)$, whose query update rule is equivalent to the attention mechanism. It includes two different terms: (a) the exponential interaction term $\text{logsumexp}$, and (b) the regularization term $\frac{1}{2} \text{diag}(Q^TQ)$, which ensures that the norm of the queries remains finite and the energy is appropriately bounded.
**Design of E(Q; K) and E(K)**. While $E_{hopfield}(Q; K)$ is proposed for a query update, for a context update, we inherit the principles of [1] and define our likelihood energy as $E_{likelihood}(Q; K) = \frac{\alpha}{2} \text{diag}(K^TK) - \sum \text{logsumexp}(Qk_i^T)$, which is symmetric to the $E_{hopfield}(Q; K)$.
That being said, as the reviewer suggested, employing $E_{likelihood}(Q; K)$ in Eq. (7) as $E_{posterior}(K; Q)$ appears to be a viable option. However, we have observed inferior performance, and we suspect that the problem stems from the design of the regularization term.
Specifically, $\frac{\alpha}{2} \text{diag}(K^TK)$ uniformly regularizes the context vectors, which often over-penalizes the contexts and causes them to vanish. To resolve this issue, instead of uniformly regularizing every context vector, we set $\alpha=0$ and propose $E_{prior}(K) = \text{logsumexp}(\frac{1}{2} \text{diag}(KK^T))$ to regularize the smooth maximum of the context vectors. By summing $E_{likelihood}(Q; K)$ and $E_{prior}(K)$, we finally obtain the posterior energy $E(K; Q)$, which is minimized by the Bayesian Context Update (BCU), Eq. (12).
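The energies above can be sketched numerically. The following pure-Python toy (the random matrices, shapes, $\alpha=0$, step size, and the use of a finite-difference gradient are all illustrative assumptions on our part; the paper's BCU uses a closed-form one-step update inside the attention layer) checks that one gradient step on $K$ reduces the posterior energy:

```python
import math
import random

random.seed(0)

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def e_posterior(Q, K, alpha=0.0):
    # E_likelihood(Q; K): (alpha/2)||k_i||^2 regularizer minus the
    # logsumexp interaction between each context k_i and all queries
    e_lik = sum(0.5 * alpha * dot(k, k) - logsumexp([dot(q, k) for q in Q])
                for k in K)
    # E_prior(K): logsumexp over (1/2)||k_i||^2, a smooth max-norm penalty
    e_pri = logsumexp([0.5 * dot(k, k) for k in K])
    return e_lik + e_pri

M, N, d = 4, 3, 5  # toy sizes: M queries, N context vectors, dimension d
Q = [[random.gauss(0, 1) for _ in range(d)] for _ in range(M)]
K = [[random.gauss(0, 1) for _ in range(d)] for _ in range(N)]

# One gradient-descent step on K via finite differences (illustration only).
eps, lr = 1e-5, 0.02
base = e_posterior(Q, K)
grad = [[0.0] * d for _ in range(N)]
for i in range(N):
    for j in range(d):
        K[i][j] += eps
        grad[i][j] = (e_posterior(Q, K) - base) / eps
        K[i][j] -= eps
K_new = [[K[i][j] - lr * grad[i][j] for j in range(d)] for i in range(N)]
assert e_posterior(Q, K_new) < base  # the update lowers the posterior energy
```

The prior term keeps the step from simply inflating the context norms, while the likelihood term pulls each $k_i$ toward the queries.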
Ablation studies in Figs. 9 and 10 support our intuitions. Specifically, $\gamma_{reg}=0$ in Fig. 9 suggests that the norm regularization is necessary for the quality and semantic alignment of generated images. In other words, the regularizer plays a role in preventing a single context vector from excessively dominating the forward attention path. Moreover, Fig. 10 suggests that increasing $\alpha$ may over-penalize the contexts, which prohibits using $E_{likelihood}(Q; K)$ in Eq. (7) as $E_{posterior}(K; Q)$.
**W2 and Q2: Lack of quantitative experimental results**. In contrast to your comment, we would like to note that the quantitative comparison results have already been reported in appendix D. We will move the results to the main paper in our revised version to emphasize the effectiveness of the proposed method. Also, per requests from other reviewers, we additionally evaluate the quality of generated images using LPIPS, which again reveals that the proposed method outperforms baseline methods. Please refer to the general comment 2.
**W3 and Q3: Lack of explanations of why energy minimization helps solve semantic misalignment**. Please refer to the general comment 1. We would like to clarify that the goal of the proposed method is to achieve *adaptive* context propagation through the UNet, which can mitigate the semantic misalignment caused by the use of fixed context embeddings. Here, we would like to remark that minimizing the energy function is equivalent to maximizing the log-likelihood of the probability density function. In this context, minimizing the energy function $E(K|Q)$ corresponds to maximizing the log-likelihood of $p(K|Q)$, which amounts to finding the most likely $K$ given $Q$ and finally achieves an *adaptive* context for better semantic alignment.
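This energy-to-likelihood correspondence is the standard Gibbs-distribution identity $p(K|Q) \propto \exp(-E(K; Q))$. A minimal toy illustration (generic, with made-up discrete energies, not code from the paper):

```python
import math

# Toy discrete Gibbs distribution: p(k) ∝ exp(-E(k)) over four candidate
# configurations with made-up energies (illustrative values only).
energies = [2.0, 0.5, 3.1, 1.2]
Z = sum(math.exp(-e) for e in energies)      # partition function
probs = [math.exp(-e) / Z for e in energies]

# The minimum-energy configuration is exactly the maximum-likelihood one.
assert probs.index(max(probs)) == energies.index(min(energies))
```

Since $\log p(k) = -E(k) - \log Z$ and $\log Z$ is constant, descending the energy is literally ascending the log-likelihood.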
**References**
[1] Ramsauer, Hubert, et al. "Hopfield networks is all you need." arXiv preprint arXiv:2008.02217 (2020).
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal
Comment: The rebuttal solves part of my issues, especially the experimental part. I am excited to see that the authors managed to simply manipulate the attention operation via a manually defined energy to achieve better generation results without any fine-tuning or relying on any prior. I am willing to accept the paper if all the experiments are reliable. Although, for me, the theory is not that reasonable, or at least not smooth or natural. For instance, the authors didn't reply about the actual meaning of why they defined $E(K)$ in this form. They only define $E(K)$ to penalize the norm of $K$. And that seems to be the only reason why they call their method `Bayesian`, which is a kind of patchwork.
Considering that the other reviewers all agree to accept the paper, I will upgrade my rating to borderline accept.
---
Reply to Comment 1.1.1:
Title: Thanks for the positive response from the reviewer
Comment: > Considering that the other reviewers all agree to accept the paper, I will upgrade my rate to borderline accept.
Thank you very much for your constructive feedback and the decision to adjust the score to *“borderline accept”*. We are truly encouraged by your enthusiasm towards our improved results. We are certain your insights and the discussion have improved our work.
Furthermore, we kindly wish to bring to your attention that the updated score has **not** yet been reflected in the review. We would be grateful if you could kindly update this as mentioned. Thank you for your consideration.
> I am willing to accept the paper if all the experiments are reliable. Although, for me, the theory is not that reasonable or at least not smooth or natural. For instance, the author didn't reply about the actual meaning of why they defined $E(K)$ in this form. They only define $E(K)$ to penalize the norm of $K$. And that seems to be the only reason why they call their method Bayesian, which is kind of patchwork.
Thanks for your positive response. We will make sure that all the experimental results will be incorporated into the final version of the paper so that the results can be reproduced reliably.
Additionally, we wish to emphasize to the reviewer that introducing the prior $E(K)$ is not a patchwork, nor is its intent to drive all $k_i$ to zero. Rather, its main objective is to prevent any element from escalating to unjustifiably high magnitudes.
This is evident when considering the properties of the log-sum-exponential function. Notably, the log-sum-exponential is recognized as a smooth (i.e. differentiable) approximation to the “max” operation, as illustrated by the inequality:
$\max(x_1,\cdots, x_N) \leq \log \sum_{i=1}^N e^{x_i} \leq \max(x_1,\cdots, x_N) + \log N$
Thus, by penalizing the max value with the prior term, we aim to constrain its elements from escalating to unrealistic magnitudes. Figures 8, 9, and 10 in our work corroborate these insights. Within our Bayesian framework, this approach offers a relaxed yet plausible prior. The underlying presumption is that a realistic value ought to be finite, and this prior simply represents one facet of the knowledge we wish to apply.
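The smooth-max property of log-sum-exponential is easy to verify numerically (a generic check with arbitrary sample values, not code from the paper):

```python
import math

def logsumexp(xs):
    # Stable log-sum-exp: shift by the max before exponentiating.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

xs = [0.5, 2.0, -1.0, 1.5]
lse = logsumexp(xs)
# Smooth-max bounds: max(xs) <= logsumexp(xs) <= max(xs) + log(N)
assert max(xs) <= lse <= max(xs) + math.log(len(xs))
```

Because the bound is tight up to the constant $\log N$, penalizing $\text{logsumexp}$ of the squared norms penalizes (a differentiable surrogate of) only the largest one, rather than all of them uniformly.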
Addressing your initial comment --"the minimum value of E(K) is achieved when all k_i converge to zero. This suggests that the author assumes k_i = 0 has the highest probability ... which contradicts reality"-- it's essential to note that penalizing the maximum value does not inherently push its values to zero. This is particularly true when the prior is paired with the likelihood $E(Q;K)$.
Historical precedent in the statistical literature supports this perspective. Take, for instance, Rissanen's renowned “universal prior for integers” used for model estimation [1]. While it does penalize the model order $N$ through the $\log_*()$ function, Bayesian model estimations employing this “universal prior for integers” don't drive the estimated model order to zero. The popularity of the universal prior for model estimation stems from its relaxed and realistic approach; it asserts a finite, realistic model order, representing just one facet of prior knowledge. This mirrors the principles of our prior model.
---
**Reference**
[1] Rissanen, Jorma. "A universal prior for integers and estimation by minimum description length." The Annals of Statistics 11.2 (1983): 416-431.
---
Reply to Comment 1.1.2:
Title: Thanks and a Kind Reminder
Comment: Dear Reviewer SVoL,
Thank you for acknowledging our contribution and your decision to increase the score to "Borderline Accept (5)".
However, as the deadline for the discussion period nears, we've noticed that the suggested score update hasn't been reflected. In case you have forgotten or are not aware, we would like to kindly remind the reviewer that you can edit the score in the initial review to reflect your suggested change.
Your prompt attention and feedback on this matter would be greatly appreciated. | Summary: The paper proposes to formulate cross-attention layers using energy-based models such that by minimizing the cross attention energy with respect to the context latent representation, the method can further alleviate semantic misalignments between generated or edited samples and the input descriptions, and allow zero-shot compositional generalization using combinations of cross attention outputs based on multiple inputs.
Strengths: 1. The paper introduces a detailed theoretical formulation of cross attentions using energy-based perspective.
2. The method outperforms existing methods qualitatively on various image generation tasks (e.g., inpainting, multi-concept generation and compositional generation) without additional training.
3. The authors have conducted relatively comprehensive evaluations on the method.
Weaknesses: 1. **Lack of limitations**. We have seen the good side of such method, which is to improve semantic alignment between generated images and text inputs. However, there are often trade-offs. For example, energy-based optimization is quite unstable, so it requires some tuning in terms of step size and the number of optimization steps. It would be good to mention the limitation of the method, for example, how much the method is slower compared to the standard denoising steps.
2. **Lack of experiments on image quality**. Though I think that, since only the context vectors are optimized, image generation quality is unlikely to change much, I would still suggest that evaluating the generated images would help understand whether the algorithm changes generation performance. In addition, although semantic alignment seems to improve in the qualitative results, it also needs to be measured quantitatively. And current metrics (e.g., CLIP) for measuring semantic alignment can be unreliable; thus a human evaluation seems necessary to compare different methods quantitatively.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. is the method quite sensitive to the hyper-parameters that you use to optimize context vectors for each image? Based on the supplementary material, it seems to me that the method is quite sensitive and often requires a lot of tuning in terms of coefficients $\gamma_{attn}, \gamma_{reg}$ for updates, step size, etc.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have included limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: Lack of limitations**. Thanks for the comment. We only modified the forward path of the cross-attention layer, and this is equivalent to a one-step gradient descent on the defined energy. Although we did not minimize the energy for multiple steps to achieve further convergence, we have shown that the one-step energy update (i.e., one forward step) effectively reduces the energy, as shown in figure 2, and it results in better performance on multiple tasks. In addition, for the modified forward path, we reuse the $QK^T$ term (algorithm 2 in appendix B), which significantly reduces the additional computational cost. In fact, both Stable Diffusion and the proposed method take ~7.0 seconds to generate a single image within 50 sampling steps on an NVIDIA GeForce RTX 3090. However, we acknowledge the reviewer's comment that the proposed method requires step sizes as hyperparameters, namely $\gamma_{attn}$ and $\gamma_{reg}$, which the user should tune for their specific applications. We address this issue in appendix D by providing guidance on the range of appropriate values for these parameters.
**W2: Lack of experiments on Image quality**. Per the reviewer’s request, we also measured LPIPS to evaluate the quality of the generated image and conducted human evaluation. Please refer to the general comment 2.
**Q1: Sensitivity to the hyper-parameters**. We would like to remark that the hyperparameters $\gamma_{attn}$ and $\gamma_{reg}$ correspond to step sizes for one-step gradient descent on the energy, which it is natural to tune for better performance. However, we would also like to emphasize that fixed hyperparameters are used in the real-image editing task. Remarkably, the proposed method outperforms state-of-the-art works (see appendix D for the detailed result). This result indicates that the proposed method exhibits robustness within a certain range of hyperparameters, which may vary depending on the task. In order to facilitate the use of the proposed method, we have included a guide for selecting hyperparameters for each task in appendix C. Still, there is a need to tune $\gamma$ for a newly suggested task, so we will add this to the limitations section.
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for the rebuttal and it has resolved some of my questions.
I will keep my rating as it is.
---
Reply to Comment 1.1.1:
Title: Thanks for the positive response from the reviewer
Comment: We sincerely appreciate the positive feedback and support for our work, along with maintaining a high rating. It is particularly heartening to hear that our rebuttal addressed the reviewer's concerns in a positive direction. We are certain your insights and the discussion have improved our work. Rest assured, we will revise the main paper to incorporate sufficient materials. Thank you for the positive feedback and constructive discussion again. | Summary: This work tackles the semantic misalignment problem of the Stable Diffusion model using the energy-based model framework. The authors first show that each cross-attention in the diffusion model can be seen as a one-step optimization of a pre-defined energy function. They then formulate a Bayesian update for the context vector accordingly. The authors demonstrate the effectiveness of their method by showing qualitative samples in multi-concept generation, text-guided image inpainting, as well as compositional generation tasks.
Strengths: I love the novelty and theoretical framework of this paper. While several other works tackle the more controllable image generation/editing problem based on modification of the cross-attention layers, this paper provides a systematic theoretical framework for it. This might not only help improve certain tasks but also help people form a deeper understanding of the model itself.
Weaknesses: 1. Lack of quantitative results:
While several qualitative samples are shown in the paper, quantitative evaluation over a large number of different samples would be more convincing for judging the model's performance against the baseline. The authors may consider using a pretrained model (for example, as [1] does) or using human evaluation.
2. Choice of hyper-parameters:
Looking at the supplementary, it seems that a successful editing requires one to choose the parameters $\alpha$, $\beta$, $\gamma$ case by case. Furthermore, it seems that changing the hyper-parameters has a great influence on the generation performance. This might limit the proposed method's use in real applications.
[1] Training Diffusion Models with Reinforcement Learning
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. The understanding of the whole UNet: While we may look at each individual cross-attention layer as updating a certain energy function, I would like to hear the authors' insight on how we can understand the UNet as a whole. At different cross-attention layers, the weights are different. And there are a batch of linear and non-linear transformations between every two cross-attention layers. Then can we understand the whole UNet as updating a certain energy? (The cascading update of C seems to suggest that the optimization across the whole UNet has some consistency.) And if possible, how?
2. The energy distribution across different attention layers: In figure 2, the authors plot the energy of 16 attention layers across different sampling steps. It seems that at all the steps, the energy will peak at the middle layer and reach bottom at the two ends. Can the authors provide some insight on why this happens?
3. In the section 3.1 algorithm, the authors provide their design for updating the context vector C. While this seems to work well on the samples shown in the paper, I'm wondering what happens if other update algorithms are used. For example, what if we don't cascade $C_{n+1, t}$ to the next (n+1)-th layer, but instead update $C$ for all layers from scratch, and update it for more than one step?
4. From figure 8 in the supplementary, it seems that increasing $\gamma_{reg}$ rather than $\gamma_{attn}$ encourages the pattern to occur. When $\gamma_{reg}$ is zero, no matter how we tune $\gamma_{attn}$, the editing cannot produce successful results (no teddy bear here). This is a bit counter-intuitive. To my understanding, the attention term should be the one that aligns the key and query. Are there any explanations for why this happens?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: Lack of quantitative results**. We would like to kindly remind the reviewer that we have already measured CLIP accuracy and DINO-ViT structure distance motivated by [1,2] for the image editing task and compared it with state-of-the-art methods in the appendix. The result shows that the proposed method exhibits the best editing performance while preserving the structure of the original input image. Please refer to the appendix for details. We will move the quantitative results to the main paper in our revised version.
Furthermore, per other reviewers' suggestions, we also conducted a human evaluation and measured LPIPS to evaluate image quality. See the general comment 2.
**W2: Choice of hyper-parameter**.
- $\alpha$: We would like to assure the reviewer that we found $\alpha$ to be the most stable and effective when set to 0, as we mentioned in lines 146-147 and the ablation study results in figure 10 in the supplementary. Therefore, we used it consistently in all experiments, eliminating the need for separate tuning.
- $\beta$: In the case of $\beta$, it plays a role as the temperature of the softmax in the conventional attention operation. Thus, following the lead of previous attention-related studies, we highly recommend setting $\beta$ to $\frac{1}{\sqrt{d}}$, where $d$ denotes the dimensionality of the embeddings.
- $\gamma$: For the hyperparameters $\gamma_{attn}$ and $\gamma_{reg}$, we would like to remark that they correspond to step sizes for one-step gradient descent on the energy, which it is natural to tune for better performance. However, we would also like to emphasize that fixed hyperparameters are used in the real-image editing tasks, instead of case-by-case tuning. Remarkably, the proposed method outperforms state-of-the-art works (see appendix D for the detailed result). This result indicates that the proposed method exhibits robustness within a certain range of hyperparameters. In order to facilitate the use of the proposed method, we have included a guide for selecting hyperparameters for each task in appendix C.
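The temperature role of $\beta$ mentioned above can be sketched with generic scaled dot-product attention (a pure-Python toy; the function name, shapes, and values are our illustrative assumptions, not the paper's implementation):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention with beta = 1/sqrt(d) as temperature."""
    d = len(Q[0])
    beta = 1.0 / math.sqrt(d)  # the recommended setting for beta
    out = []
    for q in Q:
        scores = [beta * sum(qi * ki for qi, ki in zip(q, k)) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

A larger $\beta$ sharpens the softmax toward a hard argmax; $\beta = 1/\sqrt{d}$ keeps the logit scale independent of the embedding dimensionality, which is why it transfers across tasks without tuning.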
**Q1: The understanding of the whole unet**. Thanks for the constructive question. The paper aims to interpret each individual cross-attention layer from the energy perspective, as the reviewer points out. Formulating a unified energy for the entire UNet architecture is non-trivial, mainly due to the inherent complexities arising from non-linearities. Although it would be out of the scope of this work, we would like to recommend a related work [3, 4] that tries to understand the whole Transformer from an energy perspective.
**Q2: The energy distribution across different attention layers**. Thanks for carefully reading the manuscript. As the reviewer mentioned, there is a tendency in the energy along the cross-attention layers in the UNet. It may be related to the dimensions of the feature maps, which were not taken into account when plotting the energy dynamics (for example, the energy is highest at the bottleneck layer of the UNet). However, we would like to note that the key message conveyed by Figure 2 is the relative energy gap before and after applying BCU. The figure highlights the impact of the proposed method on energy reduction. Furthermore, it is evident that the energy difference increases significantly, implying that adaptive context propagation influences the energy of subsequent CA layers, leading to cumulative energy minimization. To ensure clarity and emphasis on the relative energy gap, we will revise Figure 2 in the main paper.
**Q3: Using other update algorithms**. Per the reviewer's request, we implemented a multi-step context update for comparison while keeping the propagation of C to subsequent layers. For the multi-concept generation and inpainting tasks, we observe that this multi-step update of the context vector actually improves the quality of generated images. This is in line with our perspective that one forward pass of the context vector is equivalent to one step of gradient descent on the energy function. Please refer to Figure 3 in the rebuttal PDF for the result. That being said, we also observe that the multi-step context update is relatively computationally expensive, and a single-step update is usually sufficient for improved performance. Therefore, we decided to use a single-step update with context propagation to subsequent layers.
**Q4: $\gamma_{reg}$ and $\gamma_{attn}$ in Figure 8**. The observation is consistent with Eq. (12), where the regularization term, controlled by $\gamma_{reg}$, plays a role in preventing a single context vector from excessively dominating the forward attention path (lines 127-129). In other words, without proper regularization, other context vectors could dominate the attention path, leading to the neglect of the ‘teddy bear’ concept. We would like to remark that Stable Diffusion receives 77 tokens to generate the image. However, with the proper application of the regularization term (step size 0.025 for figure 8 in the supplementary), we can observe that BCU with non-zero $\gamma_{attn}$ can correctly generate the teddy bear. This result not only highlights the significance of the proposed BCU in capturing and expressing concepts given by textual conditions, but also emphasizes that the attention term and the regularization term need to be considered together.
**References**
[1] Tumanyan, Narek, et al. "Plug-and-play diffusion features for text-driven image-to-image translation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[2] Parmar, Gaurav, et al. "Zero-shot image-to-image translation." ACM SIGGRAPH 2023 Conference Proceedings. 2023.
[3] Yang, Yongyi, and David P. Wipf. "Transformers from an optimization perspective." Advances in Neural Information Processing Systems 35 (2022): 36958-36971.
[4] Hoover, Benjamin, et al. "Energy transformer." arXiv preprint arXiv:2302.07253 (2023).
---
Rebuttal Comment 1.1:
Comment: I extend my gratitude to the authors for their commendable work and thoughtful rebuttal. Their responses have addressed my initial concerns, inclining me towards endorsing this paper. However, as other reviewers have raised additional critiques that I find valid, I will maintain my "weak acceptance" rating for now and await their feedback on whether their concerns have been sufficiently addressed.
---
Reply to Comment 1.1.1:
Title: Thanks for the positive response from the reviewer
Comment: We sincerely appreciate the positive feedback and support for our work. It is particularly reassuring to note that our rebuttal effectively addressed the reviewer's concerns, especially regarding the lack of quantitative results and sensitivity of hyper-parameters. Again, we extend our thanks to the reviewer for the careful and constructive review. | Rebuttal 1:
Rebuttal: We sincerely thank all the Reviewers for their valuable comments. We are encouraged that the reviewers say that “the motivation of the paper is clear” (nFGp), “love the novelty and theoretical framework” (2y1g), “the idea from energy perspective is innovative” (tGPJ) and “Theoretical analysis and explanations are comprehensively conducted” (jCTY), etc.
We have summarized some of the major concerns that were raised from the reviewers below. Point-to-point responses were also included as a reply to each reviewer.
---
**Comment 1**. We further clarify the motivation and contributions of the proposed work in a more concise way. Please refer to the conceptual model figure 1 in the uploaded rebuttal pdf.
**Main motivation**: Our main point is that the conventional use of fixed context embedding is sub-optimal, as illustrated in Figure 1 of the main paper (Stable Diffusion). Such misalignment may be attributed to various causes: error-prone human-designed prompts, limited capacity of pre-trained textual encoders, fundamental gap between CLIP-space and U-Net latent space, etc. In order to mitigate the misalignment, instead of leveraging fixed contexts, we aim to establish an adaptive context by modeling p(context|representations). Note that this is a significant departure from the previous methods which only model p(representations|context) with frozen context vectors.
**Idea**: To model p(context|representations), we propose energy-based modeling. Specifically, the energy function in Eq. (3) is closely related to the attention mechanism as shown in [1]. Inspired by these results, we define p(K|Q) in the cross-attention space with the posterior energy $E_{posterior}(K; Q) = E_{likelihood}(Q;K) + E_{prior}(K)$. One of the primary contributions of the proposed method is then to create a correspondence between the context and the representation through the cross-attention layers of a diffusion model by minimizing the parameterized energy functions. In contrast to static and fixed context embeddings, the proposed method enables adaptive context propagation through energy minimization such that both Q and C are updated and propagated to subsequent layers. This test-time optimization for context-representation alignment through cross-attention has never been tried before, and may lay a foundation for further research.
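To make the adaptive-context idea above concrete, here is a schematic one-step context update in numpy. This is our own illustrative sketch, not the paper's exact Eq. (12): the function name `bcu_step` and the step sizes `gamma_attn` and `gamma_reg` are hypothetical stand-ins for the likelihood and prior terms of the posterior energy.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bcu_step(Q, C, gamma_attn=0.1, gamma_reg=0.01, beta=1.0):
    """One schematic context update (illustrative, not the paper's Eq. (12)).

    Q: (num_patches, d) image features; C: (num_tokens, d) context vectors.
    The likelihood-like term pulls each context vector toward the queries
    that attend to it; the prior-like term shrinks context norms.
    """
    S = beta * C @ Q.T                 # (num_tokens, num_patches) scores
    A = softmax(S, axis=1)             # attention of contexts over queries
    return C + gamma_attn * (A @ Q) - gamma_reg * C   # residual update

rng = np.random.default_rng(0)
Q = rng.normal(size=(16, 8))
C = rng.normal(size=(4, 8))
C1 = bcu_step(Q, C)
```

The updated context `C1` would then be propagated to the next cross-attention layer instead of the fixed CLIP embedding.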
---
**Comment 2**. Reviewers are kindly reminded that, due to the limited space, we reported multiple results in the appendix of the original submission, which includes ***quantitative comparison* (section D.5)**, ablation study on BCU, CACAO (Fig 9), analysis on prior energy $E(K)$ (Fig 10), more visualizations (Fig 11-14), etc.
In the attached rebuttal PDF file, we now add quantitative comparisons on compositional editing tasks with the CelebA-HQ dataset. Specifically, we focus on three image-to-image translation tasks: woman $\rightarrow$ man, woman $\rightarrow$ woman w/ glasses, and woman $\rightarrow$ man w/ glasses. For this, we select source images of women without wearing glasses from CelebA-HQ. To ensure fair comparisons, all methods leverage the same version of Stable Diffusion, sampler, sampling steps, etc.
Resp_Table 1 shows that the proposed energy-based framework achieves a high CLIP-Acc while maintaining low Structure Dist and LPIPS. Note that the proposed method works consistently well on the compositional editing, i.e., woman $\rightarrow$ man w/ glasses. Moreover, we conduct a user study to evaluate the perceptual quality of generated samples. Following the protocol in [2], we ask 21 people to rate the scores on a scale from 1(poor) to 5(excellent) on the following questions: Q1) Does the output align with the intended semantics of the target text? (Text-match), Q2) Do the generated images appear realistic? (Realism), Q3) Do the outputs preserve the content information from source images? (Content). Resp_Table 2 shows that the proposed method consistently scored high across all questions. We would like to remark that, for the cat-to-dog case, while the DDIM inversion produces outputs in alignment with the given text, as evidenced by the highest score for Q1, it is notably limited by a significant loss of original content, reflected by the lowest score on Q3. This observation aligns with our findings reported in section D.5 and figure 7 in appendix. We will include all the quantitative results in the revised version of our main paper.
---
**[Resp_Table1] CelebA-HQ results**
|CelebA-HQ| Woman→Man | | | Woman→Woman w/ glasses | | | Woman→Man w/ glasses | | |
|-|-|-|-|-|-|-|-|-|-|
|**Method**|**CLIP Acc (↑)**|**Dist (↓)**|**LPIPS (↓)**|**CLIP Acc (↑)**|**Dist (↓)**|**LPIPS (↓)**|**CLIP Acc (↑)**|**Dist (↓)**|**LPIPS (↓)**|
|CycleDiffusion|57.0%|0.156|0.433|74.0%|**0.018**|**0.117**|63.5%|0.237|0.443|
|Pix2Pix-zero|93.0%|0.043|0.391|66.0%|0.024|0.266|93.9%|0.052|0.444|
|Ours|**94.0%**|**0.035**|**0.324**|**99.0%**|0.029|0.292|**96.0%**|**0.037**|**0.341**|
**[Resp_Table2] User Study**
| | Cat→Dog | | | Horse→Zebra | | | Cat→Cat w/ glasses | | |
|-|-|-|-|-|-|-|-|-|-|
|**Method**|**Q1**|**Q2**|**Q3**|**Q1**|**Q2**|**Q3**|**Q1**|**Q2**|**Q3**|
|SDEdit|4.15|4.00|4.10|4.00|4.05|4.55|4.40|4.40|4.30|
|DDIM-inv|**4.30**|3.85|2.35|4.40|3.50|2.40|3.60|3.10|2.10|
|Pix2Pix-zero|3.35|2.80|2.80|3.70|3.05|3.25|3.35|3.15|2.80|
|Ours|3.95|**4.00**|**4.40**|**4.55**|**4.30**|**4.65**|**4.70**|**4.50**|**4.70**|
---
**References**
[1]: Ramsauer, Hubert, et al. "Hopfield networks is all you need." arXiv preprint arXiv:2008.02217 (2020).
[2]: Kwon, Gihyun, and Jong Chul Ye. "Diffusion-based image translation using disentangled style and content representation." International Conference on Learning Representations. 2023.
Pdf: /pdf/2b98a52ca88a8e82ca00aec71770ee607fba89b6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes an energy-based model (EBM) framework that addresses semantic misalignment in text-to-image diffusion models by incorporating EBMs in each cross-attention layer and minimizing a nested hierarchy of energy functions, achieving highly effective results in diverse image generation tasks. The figures shown demonstrate the effectiveness of the EBM framework.
Strengths: 1, The proposed framework demonstrates general applicability to diverse tasks, such as inpainting, image editing, and multi-concept generation.
2, Theoretical analysis and explanations are comprehensively conducted to enhance the robustness of the experimental results.
3, Notably, this paper stands out as the first diffusion model study observed to incorporate an energy-based perspective into its formulation.
Weaknesses: 1, The experimental results include only qualitative comparisons with some other methods. In my view, some quantitative numbers should also be shown in the paper to demonstrate the performance of this method compared to others.
2, For multi-concept generation, Custom Diffusion [1] was released at the end of last year; it should be considered as a comparison.
3, The same comparison limitations also apply to inpainting (considering Blended Diffusion [2], GLIDE [3]) and image editing (Prompt2Prompt [4], pix2pix-zero [5]).
4, For evaluation metrics, there are also commonly used metrics in different sub-area. For example, the CLIP-score/Lpips/Structure-Dist can be considered in image editing tasks.
[1] Multi-Concept Customization of Text-to-Image Diffusion
[2] Blended Diffusion for Text-driven Editing of Natural Images
[3] GLIDE: Towards photorealistic image generation and editing with text-guided diffusion model
[4] Prompt-to-Prompt Image Editing with Cross Attention Control
[5] Zero-shot Image-to-Image Translation
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: As I stated above, the main drawback of this paper is in the experimental part. There is a lack of qualitative comparisons with appropriate methods; instead, the original Stable Diffusion model always serves as the baseline comparison. Furthermore, there are insufficient quantitative results to show the effectiveness of the proposed framework.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: As I stated in the weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: Some quantitative numbers should also be shown in the paper**. Contrary to this comment, we would like to remind the reviewer that quantitative comparison results have already been reported in appendix D. We will move the results to the main paper in our revised version to emphasize the effectiveness of the proposed method. Also, per requests from other reviewers, we additionally evaluate the quality of generated images using LPIPS, which again reveals that the proposed method outperforms baseline methods. Please refer to the general comment 2.
**W2: More baselines in multi-concept generation**. Thank you for the suggestions. However, Custom Diffusion is for personalized generation, which requires a few images that contain the desired concepts. In contrast, the proposed method depends only on a given textual condition to generate multiple concepts simultaneously, so a direct comparison would not be fair. Instead, we compare against Composable-Diffusion [1], which is commonly used as a baseline for multi-concept generation. Please refer to figure 8 in the rebuttal pdf. We will add this result to the revised paper.
**W3: More baselines in inpainting and image editing**. Per your suggestion, we conduct further comparison studies with Blended Latent Diffusion [2]. Please refer to figure 9 in the rebuttal pdf. We will add the results in the revised paper. Also, we would like to kindly note that the performance of recent image-editing algorithms has already been compared and reported in appendix D.
**W4: evaluation metrics**. We have used CLIP-acc and Structure-Dist by following [3, 4] (see appendix D). We will move the quantitative comparison results to the main paper in our revised version to emphasize the effectiveness of the proposed method. Also, per the requests from reviewers, we further evaluated the quality of generated images from the real-image editing task via Lpips. Please refer to the general comment 2.
**References**
[1] Liu, Nan, et al. "Compositional visual generation with composable diffusion models." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[2] Avrahami, Omri, Ohad Fried, and Dani Lischinski. "Blended latent diffusion." ACM Transactions on Graphics (TOG) 42.4 (2023): 1-11.
[3] Tumanyan, Narek, et al. "Plug-and-play diffusion features for text-driven image-to-image translation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[4] Parmar, Gaurav, et al. "Zero-shot image-to-image translation." ACM SIGGRAPH 2023 Conference Proceedings. 2023.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks to your rebuttal reply. I already proceeded to review the supplementary material to check your referenced information. Consequently, I would like to address my final inquiry: in the context of multi-concept composition, are you achieving something similar to Attend-and-Excite [1] framework? Is Attend-and-Excite comparable to your proposed methodology?
[1] attend-and-excite: attention-based semantic guidance for text-to-image diffusion models
---
Reply to Comment 1.1.1:
Title: Thanks for the additional comment.
Comment: We appreciate the reviewer's follow-up query and are pleased to provide further clarification.
In response to the reviewer's query, our answer is affirmative. In general, *Attend-and-Excite* performs well for multi-concept as it is designed for that specific goal. However, the proposed method would be better for textual conditions beyond the simple union of concepts (e.g. A and B, A with B). This is largely attributed to the effectiveness of the adaptive context update.
The *Attend-and-Excite* is motivated by the intuition that "at least one patch in its attention map should exhibit a high activation value for a token to be manifested in the generated image". Consequently, *Attend-and-Excite* updates latent at each time point to ensure that the most neglected subject token is more attended in the cross-attention layer. In our context, this could be classified as a query-only-update mechanism with a fixed context.
While *Attend-and-Excite* effectively generates images based on the textual condition that contains a union of two concepts, it might be limited for more complex textual conditions. For example, we observe that *Attend-and-Excite* is prone to ignore the relationship between two concepts (e.g. A cat *wearing* a shirt, A blue dog *on* the orange sofa).
In contrast, the proposed method adaptively updates the context including each concept and the relationship. In fact, the proposed method successfully generates images that reflect not only multiple concepts but also their relationship (please refer to figure 11 in appendix).
Unfortunately, we're constrained from uploading supplementary figures or links at this time.
However, we would like to suggest that the reviewer tries out the *Attend-and-Excite* demo on HuggingFace.
- prompt 'A cat wearing a shirt' with seed [0] and token_indices [2,5]
- prompt 'A blue dog on the orange sofa' with seed [0] and token_indices [2,3,6,7]
Note that the results are similar even if we apply *Attend-and-Excite* to the relationship tokens (e.g., wearing, on). | Summary: This paper proposes Bayesian Context Update (BCU) and Compositional Averaging of Cross-Attention Output (CACAO). The main idea is to view the cross-attention between text and image as optimizing an energy-based model, and modify the intermediate outputs according to the energy. A series of examples are shown to demonstrate the effectiveness of BCU and CACAO.
Strengths: 1. The examples are visually intriguing.
2. The energy viewpoint of the cross-attention mechanism in Stable Diffusion could be enlightening.
Weaknesses: 1. I can understand that Q_t is the H * W * C dimensional feature in the network processing the latent-space image at time step t, and C_t is the context vector which is an optimizable variable added by the authors. C_t is initialized to C_clip, then it is optimized going through the layers. As the layers go deeper, C_t is shifted from C_clip, which means the semantic meaning is shifted from the original user text input. My question is: how severe is the shift? How do the authors control the shift? In fact, it makes more sense to me if Q_t,l are the variables to be optimized.
2. More clarification on the examples in Figure 2 is required. I tried Stable Diffusion by myself, and found Stable Diffusion can generate "A monkey and a hat" with the correct semantic meaning. I kindly ask the authors to elaborate how the Stable Diffusion examples are picked. Moreover, Stable Diffusion + Ours degrades the image quality significantly (all the images become black-and-white). Could the authors provide some explanation on why that happens?
3. More quantitative results should be added to prove the effectiveness of the proposed method. In the provided qualitative examples, the advantage of the proposed method over Composable-Diff and Plug-and-Play is marginal. Since the results of generative models are heavily dependent on the random seeds, I suggest the authors to provide systematic quantitative evaluation to benchmark the proposed method. I do notice the quantitative evaluation part in the Appendix, but only animal transition seems limited.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please refer to Weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are properly discussed at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: How severe is the shift? How do the authors control the shift?**. Thanks for pointing out the important question that is deeply related to the motivation of our work. We would like to emphasize that the change of context vector is adaptive, not an invalid shift, because the proposed BCU allows an adaptive propagation of the context vector through UNet, resulting in a context vector that is better aligned with $Q_t$ on cross-attention space. For more details, please refer to the general comment 1. Moreover, we would like to remark that control of step size ($\gamma_{reg}$) and a residual path for context update (Eq. (12)) can avoid deviating outside the appropriate semantics.
**Experiment.** To further support our claim, we use the updated $C_T$ ($T$ for a number of sampling time steps) from the proposed method as a fixed context vector instead of $C_{clip}$ and perform multi-concept generation using the conventional cross-attention operation. Since the energy functions are defined differently for each sampling time step, using $C_T$ as a fixed context for each sampling time step may result in low-quality samples. However, this approach allows us to evaluate whether the updated context contains the correct semantics of the given textual conditions, in contrast to $C_{clip}$. Figure 2 in the rebuttal pdf demonstrates that the updated context vector does indeed capture the correct semantics of the given textual conditions (e.g., both black horse and yellow room). We will include this result in the revised paper to further strengthen the evidence for the effectiveness of the proposed method.
**W2: More clarification on the examples in Figure 2, degradation of the image quality**. We would like to note that when Stable Diffusion already generates the correct semantic meaning, the improvement from the proposed method is marginal (refer to figure 7 in the rebuttal pdf). Therefore, we sampled multiple times with random seeds from 0 to 40 and showcase the cases where Stable Diffusion produces incorrect samples, which the proposed method improves.
For Figure 2 in the main paper, we recognize that increasing $\gamma_{attn}$ helps avoid gray image generation. Please refer to Figure 6 in the rebuttal pdf. This is related to the role of the regularization term $\gamma_{reg}$. Finally, we would like to kindly note that our method does not significantly degrade image quality; please compare the 1st and 3rd rows of Figure 3. Indeed, the black-and-white images shown are inherent to the behavior of the original Stable Diffusion model upon which our approach is built.
**W3: More quantitative results**. Per the requests from reviewers, we further compare the performance of human face editing and measure additional metrics such as LPIPS and human evaluation to evaluate the quality of generated images. Furthermore, we compare the proposed method with additional baseline methods. Please refer to the general comment 2. Lastly, we would like to remark that the provided quantitative evaluations are by following the prior works [1, 2] on this research area.
**References**
[1] Tumanyan, Narek, et al. "Plug-and-play diffusion features for text-driven image-to-image translation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[2] Parmar, Gaurav, et al. "Zero-shot image-to-image translation." ACM SIGGRAPH 2023 Conference Proceedings. 2023.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: The authors conducted additional experiments and clarified the unclear parts. Therefore I increased my score to 5. I suggest the authors to incorporate the new results in the later versions of their paper. Thank you for the efforts!
---
Reply to Comment 1.1.1:
Title: Thanks for the positive response from the reviewer
Comment: We sincerely appreciate you taking the time to provide your insightful feedback and raising the score. We are certain your insights and the discussion have improved our work. We will incorporate your suggestion and new results into the next version of our paper. | Summary: This paper aims to optimize the context representation through an energy-based formulation of the cross-attention within the U-Net at test time, to achieve semantic alignment between the textual representation and the image features in the U-Net.
Experiments are performed for the multi-concept generation, text-guided inpainting and image editing.
Strengths: + The motivation of the paper is clear and is supported with adequate experiments.
+ The approach allows for compositional generation. The qualitative results show that the approach does better than simple DDIM+inversion.
+ Multiple concepts can be composed together in the generated images with test-time optimization.
Weaknesses: - The formulation of the gradient posterior in Eq. 9 is missing the term on the expectation of the gradient on the right side?
- The difference and the motivation to optimize the energy wrt the keys instead of queries is not so clear? Ablations with the two approaches can be done to show the benefits and attention visualizations may help to evaluate the difference between the two.
- The subscripts 1, 2 below the softmax are not defined.
- In lines 150-151 what is meant by the "forward path of cross-attention"?
- Low sample diversity: Example in figure 2 seems to harm the sample diversity wrt Stable diffusion.
- What is the computational overhead? How many optimization steps are required to converge to a good solution?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see weaknesses above.
Specifically following clarifications are required:
1. The motivation for optimizing keys over queries.
2. The role of the regularization term in Eq. 12.
3. In theorem 1 the updates are wrt to the keys, however, eq 14 considers the energy wrt to the queries. The whole idea of the paper in terms of novelty from theorem 1, then why use eq. 14?
The results and comparisons with the baselines can be moved to the main paper. Theorem 1 and Eq 14 need to be better discussed (better combined in one section) and highlighting the contributions and the prior work clearly.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: The formulation of the gradient posterior in Eq. 9**. We would like to clarify that the notation E in eq. (9) denotes the energy function defined in eq. (7) and (8), not the expectation. We did notice a typo in eq. (1), where the expectation should be written as $\mathbb{E}$; we will fix eq. (1) in our revised paper.
**W2: The difference and the motivation to optimize the energy wrt the keys instead of queries**. We would like to clarify that optimization for the energy w.r.t the queries have already been done by the cross-attention operation as described in proposition 1 and eq. (6). For the difference and the motivation to optimize the energy w.r.t key, please refer to the general comment 1.
**W3, 4: The subscripts 1, 2 below the softmax, Forward path of cross-attention**. The definition for subscripts below the softmax is described in lines 64-68, section 2.2. The “forward path of cross-attention” means one operation of the conventional cross-attention, which is described in eq. (6). While eq. (6) only outputs updated query Q, the proposed BCU means updating the context vector C according to eq. (12) and propagating it to the subsequent UNet layer. Please refer to figure 1 in the rebuttal pdf file.
**W5: Low sample diversity**. Regarding figure 2 in the main paper, the influence of $\gamma_{reg}$ is significant, causing the monkeys to appear more grayish, because the role of "reg" is to normalize the context. If we increase $\gamma_{attn}$ twofold, the result inherits the diversity of Stable Diffusion (see figure 6 in the rebuttal pdf). We will include a discussion of this in our revised paper. That said, we would like to point out that the proposed method can generate more diverse images than Stable Diffusion, which may not accurately reflect the context and thus yield low diversity, as shown in figure 3 in the main paper and figure 11 in the appendix. Thus, the proposed method proves more beneficial for generating a wider range of images through textual guidance.
**W6: Computational overhead and optimization steps**. We would like to emphasize that the additional computational cost is negligible. Both Stable Diffusion and the proposed method take 7.0 seconds to generate a single image within 50 sampling steps with NVIDIA GeForce RTX 3090. Specifically, for the modified forward path, we reuse the $QK^T$ term (algorithm 2 in appendix B), which significantly reduces the additional computational cost. Also, we would like to clarify that we have not *explicitly* optimized image representations and context embeddings. The primary contribution of the proposed method is to establish a link between cross-attention operation and energy minimization. This allows us to effectively modify the cross-attention operation, resulting in adaptive propagation of the context embedding that better aligns the image features than a fixed CLIP embedding. Throughout our work, we have demonstrated that a single step of cross-attention, which is equivalent to one-step gradient descent for a defined energy function, is sufficient to improve performance for various tasks. In particular, we can observe the emergence of multiple concepts related to given textual conditions in the early phase of generation, which implies a fast convergence of the energy function (figure 2 in the main paper).
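The $QK^T$ reuse described above can be sketched in a few lines of numpy. This is a minimal illustration with our own naming, not algorithm 2 from the appendix verbatim: the similarity matrix is computed once and normalized along different axes for the query-side attention and the context-side update.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shared_cross_attention(Q, K, V):
    """Sketch of a modified forward path that reuses the QK^T scores.

    Q: (num_queries, d); K, V: (num_keys, d).
    """
    S = Q @ K.T                           # computed once, reused twice
    Q_update = softmax(S, axis=1) @ V     # standard query-side attention
    K_update = softmax(S, axis=0).T @ Q   # context-side update reuses S
    return Q_update, K_update
```

Because the expensive `Q @ K.T` product is shared between the two updates, the extra cost of the context-side path is a single additional softmax and matrix product.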
**Q1: The motivation for optimizing keys over queries**. Please see the general comment 1.
**Q2: The role of the regularization term**. The regularization term in Eq. 12 originated from the conditional energy function w.r.t K (7) and the role is described in lines 127-132. In summary, it constrains the energy of each context vector $k_i$, preventing it to explode during the maximization of logsumexp($Qk_i$, $\beta$). Specifically, $\gamma_{reg}=0$ in Fig 9 suggests that the norm regularization is necessary for the quality and semantic alignment of generated images. Our regularizer plays a role in preventing a single context vector from excessively dominating the forward attention path. For more details, please see the general comment 1.
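As a toy numerical illustration of this point (our own schematic construction, not the paper's exact prior energy): maximizing $\mathrm{logsumexp}(Qk_i)$ by gradient ascent without any penalty lets $\|k_i\|$ grow without bound, while adding a norm penalty keeps it bounded.

```python
import numpy as np

def ascend(Q, k, steps=300, lr=0.5, beta=1.0, gamma_reg=0.0):
    """Gradient ascent on logsumexp(beta * Q @ k), optionally with a
    quadratic norm penalty (a schematic stand-in for the prior energy)."""
    for _ in range(steps):
        s = beta * Q @ k
        p = np.exp(s - s.max())
        p /= p.sum()                       # softmax weights over patches
        grad = beta * Q.T @ p - gamma_reg * k
        k = k + lr * grad
    return k

rng = np.random.default_rng(1)
Q = rng.normal(size=(32, 8))
k0 = rng.normal(size=8)
free = ascend(Q, k0)                       # no penalty: ||k|| grows steadily
reg = ascend(Q, k0, gamma_reg=1.0)         # penalty keeps ||k|| bounded
```

Without the penalty, the softmax concentrates on a single query and the context vector drifts in that direction indefinitely, which is the dominance behavior the regularizer prevents.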
**Q3: eq. 14**. As mentioned by the reviewer, the primary contribution of the paper is theorem 1. However, we would like to highlight that the essence of the paper is the energy perspective interpretation of the cross-attention operation. In other words, what we have proposed in the paper are two different forward paths (each for query and key) of cross-attention layers that aim to minimize different energies (eq. 14 and eq. 9). Please refer to figure 1 in the rebuttal pdf.
In this context, we could readily extend the energy perspective interpretation to compositional generation by formulating the task as sampling from the posterior distribution as described in equation (13), leading to the novel energy (14) and method CACAO (16). It is important to note that CACAO is designed to focus on the query update, allowing for the effective incorporation of multiple textual conditions into image generation, while BCU enables adaptive context propagation. The compatibility of BCU and CACAO for compositional image generation is evident, as demonstrated by the ablation study result in the appendix.
In summary, the proposed energy perspective interpretation of cross-attention operation is a powerful tool that could be leveraged from a better understanding of the stable-diffusion model to improved performance for various applications.
**Q4: move results to the main paper, better discuss Theorem 1 and Eq 14**. Agreed. Moving the comparison results to the main paper is beneficial and we will revise the manuscript accordingly. Also, we will consider re-organizing the manuscript per reviewers' suggestion to highlight the contribution and main motivations of this work.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: I have read the rebuttal and the concerns raised have been adequately addressed. That said, the main limitation of the work is in the organization which the authors promise to revise. Taking into account that the peer reviewers are positive that these changes will be made and included in the later versions I will raise my score to 5. I urge the authors to also include a discussion in the main paper or in the supplemental on the computational cost for completeness. Thank you.
---
Reply to Comment 1.1.1:
Title: Thanks for the positive feedback
Comment: We truly appreciate the positive feedback and raising the score. We are pleased to hear that our rebuttal could adequately address the reviewer's concerns. We certainly agree that it would be beneficial to incorporate the quantitative results and additional results from this rebuttal period into the main paper. Rest assured, we will revise the main paper to incorporate sufficient materials. Thank you for the constructive discussion again. | null | null |
Hypothesis Selection with Memory Constraints | Accept (poster) | Summary: The paper studies hypothesis selection in a streaming setting, with samples arriving online. The main result gives a near-optimal memory-sample trade-off for hypothesis selection. Along the way, the paper invents several new techniques to avoid expensive memory usage of prior work.
Strengths: This is one of the most classic questions in nonparametric statistics. Prior work [MS08, DK14] have tackled the run-time problem. I believe that its memory efficiency is also an important aspect. This paper offers exactly such a study.
At a technical level, the paper nearly resolves the question, up to some minor logarithmic factors. The high-level approach is simple and may be practical. The techniques, as far as I know, are novel and could potentially be used by later works. I have not checked the proof details in the appendix, but based on the main body, the main arguments are sound.
The paper is generally well-written.
Weaknesses: I do not find any major weakness of this paper.
One thing I should point out is that the imperfect comparison issue (discussed in section 2.2.1) has been studied in the literature of noisy sorting. In fact, I believe the main result from the random ladder tournament, *at least* in the offline setting (section 3.1), can be derived simply in a blackbox way from Theorem 3.8 of Sorting and Selection with Imprecise Comparisons by Miklós Ajtai, Vitaly Feldman, Avinatan Hassidim and Jelani Nelson (TALG 2015) (https://dl.acm.org/doi/10.1145/2701427). In particular, their $\delta$ is your $3\Gamma$, the error gap between good and bad hypotheses in the comparison procedure (Algorithm 1). Their number of items there is your number of hypotheses, both denoted by $n$. So up to another factor of $k=3$, applying their randomized max-finding algorithm would yield a hypothesis that’s $9\Gamma$ close to the optimal. (Another related work is https://arxiv.org/abs/1606.02786 where an improved algorithm is given and an application to hypothesis selection is explicitly derived. I have not checked the details, though.)
I would ask the author(s) to confirm this and see if an alternative argument using the paper above can be made and whether it’s interesting.
I should note that this, if true, does not trivialize the results in this paper. The algorithm of Ajtai et al cannot be implemented in the memory-constrained setting, below a memory size of $n^{1/3}$. Also in the offline setting, the resulting constant is not optimal for the hypothesis selection problem.
Minor comments
---
Line 62–67: list these two query access assumptions as \begin{itemize}
Line 65: “ the probabilities of the Scheffé sets” — I think it’s more clear to refer to this as “the probability masses of the Scheffe sets”.
Line 117: “we sample a uniformly random new hypothesis from H” — Please clarify: is this sampling with replacement, if I view the meta-distribution as uniform over the n input hypotheses? That is, a hypothesis can be selected multiple times against the same single hypothesis in memory.
Line 202: Regarding [MS08], mention that the simple linear-time selection from Appendix A doesn’t do such expensive preprocessing.
Line 265: Can you clarify in which scenario or parameter regime would you use Scheffé counts? (I didn’t read the appendix.) Note that the memory usage of this technique is quadratic in $n$. Hence, are you tracking these counts on a small subset of hypotheses?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: To avoid the assumption of having access to the probabilities of Scheffé sets, have you considered approaches that don’t require Scheffé sets at all? I understand that the minimum distance estimate [DL01, MS08] is used in the all-go-against-all tournament step. Can ideas like that be used throughout, so we don’t need to query Scheffé sets?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The author(s) have discussed some future directions, though I think a conclusion section summarizing these would be great.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > the main result from the random ladder tournament, ... can be derived simply ... from ...(https://dl.acm.org/doi/10.1145/2701427).
Thank you for the reference. We will be sure to add it to our discussion of previous work. That said, we did not see how the main result can be derived in a blackbox way from Theorem 3.8.
Defining $val(H) := \|H - P\|_{tv}$, we can view hypothesis selection as the task of approximately minimizing $val(H_i)$ over $i \in [n]$ given a noisy comparator for these values. To apply their Theorem 3.8 to this task, we need to identify the distance parameter $\delta$ for which our COMPARE subroutine (our Theorem 1) is always guaranteed to produce the correct comparison between two hypotheses whose values are at least $\delta$ far apart. For every pair of hypotheses $H_i, H_j$, our COMPARE subroutine correctly reports that $val(H_i) \ge val(H_j)$ as long as $val(H_i) \ge 3 \cdot val(H_j)$, and this is tight.
Thus, the distance parameter we can achieve is lower bounded by $\delta \ge \min\{val(H_i), val(H_j)\}$. Taking the maximum over all pairs $i, j$, we see that the best (smallest) $\delta$ we can achieve is the second largest value of $val(H_i) = \|H_i - P\|_{tv}$, taken over all $i \in [n]$. This can be arbitrarily large as a multiple of $OPT = \min_{i \in [n]} \|H_i - P\|_{tv}$, and so approximating the minimum value to within $O(\delta)$ may not bring us within $O(OPT)$.
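To illustrate this numerically, here is a tiny Python check with invented TV values (the numbers are hypothetical, chosen only to show that $\delta$ can be a large multiple of $OPT$):

```python
# Invented TV values, for illustration only.  The pairwise guarantee gives
# delta >= min{val(H_i), val(H_j)}, so the best delta over all pairs is the
# second largest val(H_i), which can be much larger than OPT.
vals = [0.01, 0.40, 0.45]   # hypothetical val(H_i) = ||H_i - P||_tv
opt = min(vals)             # OPT = 0.01
best_delta = max(min(vals[i], vals[j])
                 for i in range(len(vals)) for j in range(i + 1, len(vals)))
print(best_delta, best_delta / opt)  # delta = 0.4, a ratio of about 40
```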
Of course, we would be happy to be corrected on this point if the reviewer has a different argument in mind.
-----------------------------
> Another related work is https://arxiv.org/abs/1606.02786 where an improved algorithm is given ...
This paper (henceforth "AFJOS16") is absolutely relevant, and we thank the reviewer for pointing it out.
AFJOS16 contains an algorithm for solving hypothesis selection (density estimation) that makes linearly many comparisons, runs in $O(n\log n)$ time, and achieves accuracy ratio $\alpha = 9$.
As far as we understand, our algorithms improve on AFJOS16 in two respects.
1. It is not straightforward how to implement the AFJOS16 algorithms with $o(n)$ bits of memory. In particular in their "Q-Select" algorithm, one maintains a set of hypotheses of size $\Omega(n)$.
2. The accuracy parameter $\alpha$ we obtain is better (smaller) than that of AFJOS16. Note that AFJOS16 solves the density estimation problem without promise (no knowledge of $\Gamma$), so to draw a fair comparison, we make this statement about the "no promise" version of our algorithm achieving $\alpha = 5$ (which is less than the factor of 9 they achieve). See Section A, Theorem A.2 of our submission.
We will make sure to cite and discuss this paper in future versions of the paper.
-----------
> Line 117: “we sample a uniformly random new hypothesis from H” — Please clarify: is this sampling with replacement,...
Good question. We draw the hypotheses from a meta distribution in an i.i.d. manner. If the meta distribution is a uniform distribution over $\mathcal{H}$, then we sample uniformly from $\mathcal{H}$ with replacement, implying that hypotheses can be selected several times. These repetitions do not affect our analysis as long as we ensure that $H^*$ shows up with probability at least $p_0$ at every step.
We use the generality of the meta-distribution in our main result, where we apply the ladder to the (not necessarily uniform) distribution on hypotheses that comes from the output of a particular subroutine.
A second reason is that the meta-distribution version of the ladder allows us to amplify the success probability of the algorithm by running for more steps without changing the approximation guarantee. (Naive ways of amplifying the success probability require another layer of comparisons, which changes the approximation constant.)
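As a small illustration (hypothetical hypothesis names and weights), i.i.d. draws from a meta-distribution reduce to sampling with replacement in the uniform case:

```python
import random

# Hypothetical hypothesis names and weights, for illustration.  Challengers
# are drawn i.i.d. from a meta-distribution; with uniform weights this is
# exactly sampling with replacement, so the same hypothesis can challenge the
# incumbent several times.  The analysis only needs the best hypothesis H* to
# appear with probability at least p0 at every step.
rng = random.Random(0)
hypotheses = ["H1", "H2", "H3", "H_star"]
weights = [0.2, 0.2, 0.2, 0.4]  # a non-uniform meta-distribution with p0 = 0.4
draws = [rng.choices(hypotheses, weights=weights)[0] for _ in range(12)]
print(draws)  # 12 draws from 4 hypotheses, so repetitions necessarily occur
```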
We will edit our text to make sure these points are clear.
-------------
>Line 265: Can you clarify in which scenario or parameter regime would you use Scheffé counts? ...
Your observation on the quadratic memory use of Scheffé counts is right. For an algorithm with $b$ bits, we focus on $t \approx \tilde{O}(\sqrt{b})$ hypotheses; $t$ is chosen so that the $O(t^2)$ Scheffé counts needed for any comparison among $t$ hypotheses fit in $\Theta(b)$ bits.
We have used the Scheffé counts for two results:
1) A basic tradeoff, presented in Lemma C.1.
2) The main result, presented in Section E.4.
We discuss each of these in a bit more detail in case it is helpful:
1. We start off with a sketch of our basic tradeoff: We store the Scheffé counts for $t$ random hypotheses. In this way, we can easily compare these $t$ hypotheses, so we can run the random ladder tournament for $t$ steps. We then repeat this process: we take the winner of the last step, add $t-1$ random hypotheses, draw fresh samples to obtain the Scheffé counts of these $t$ hypotheses, and move forward with another $t-1$ steps of the random ladder tournament. This approach leads to the tradeoff described in Lemma C.1. This tradeoff is off by a factor of $t \approx \sqrt{b}$ due to the inefficiency of the Scheffé counts. However, it has a better dependency on $\epsilon$ (and in some way on $\log n$) than what one gets from Lemma 3.1: in Lemma C.1 we have $b \approx O(t^2)$ instead of the $b \approx O(t \log n/\epsilon^2)$ of Lemma 3.1.
2. We exploit this advantage of Scheffé counts later in the proof of our main result. The reason that we can tolerate this extra factor of $\sqrt{b}$ is that in the final algorithm the random ladder is used on only $k$ filtered hypotheses, where $k \ll n$ (roughly, $k \approx \sqrt{n}$). Hence, we have some slack to tolerate $\sqrt{b}$. On the other hand, if we use the other simple tradeoff (Lemma 3.1), which is based on sorted lists, we get a factor of $1/\epsilon^4$ that we cannot improve. Therefore, while the simple tradeoff of Lemma C.1 is worse than Lemma 3.1, it is the tradeoff we used in our main theorem. See Section E.4 in the supplementary material for more details.
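For intuition, the block-based process described above can be sketched in Python as a toy simulation (a simplified illustration only: the pairwise comparison below is the textbook Scheffé test for discrete pmfs, not the paper's exact COMPARE subroutine, and all parameters are arbitrary placeholders):

```python
import random

def scheffe_winner(h_i, h_j, samples):
    # Textbook Scheffé test: on A = {x : h_i(x) > h_j(x)}, the winner is the
    # hypothesis whose mass on A is closer to the empirical mass of P on A.
    A = {x for x in range(len(h_i)) if h_i[x] > h_j[x]}
    emp = sum(1 for s in samples if s in A) / len(samples)
    err_i = abs(sum(h_i[x] for x in A) - emp)
    err_j = abs(sum(h_j[x] for x in A) - emp)
    return h_i if err_i <= err_j else h_j

def block_ladder(hypotheses, p, t=4, blocks=3, s=300, seed=1):
    rng = random.Random(seed)
    winner = rng.choice(hypotheses)
    for _ in range(blocks):
        # One block: the incumbent plus t-1 random challengers, compared using
        # Scheffé statistics computed from fresh samples drawn for this block.
        challengers = [rng.choice(hypotheses) for _ in range(t - 1)]
        samples = rng.choices(range(len(p)), weights=p, k=s)
        for h in challengers:  # t-1 ladder steps within the block
            winner = scheffe_winner(winner, h, samples)
    return winner

p = [0.4, 0.3, 0.2, 0.1]      # the unknown target distribution
good = [0.4, 0.3, 0.2, 0.1]   # a hypothesis equal to p
bad = [0.05, 0.05, 0.1, 0.8]  # a hypothesis far from p in TV distance
print(block_ladder([good, bad], p))
```

With a hypothesis close to $P$ in the pool, the Scheffé comparison favors it with high probability on each fresh batch of samples, so the incumbent typically ends up being the closer hypothesis.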
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! I maintain my rating and recommend accept.
Could you fix the LaTex in your rebuttal?
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt response. We apologize for the formatting issues. Since the deadline has passed, we cannot edit our text. However, we have added that part of our response below:
> the main result from the random ladder tournament, ... can be derived simply ... from ...(https://dl.acm.org/doi/10.1145/2701427).
Thank you for the reference. We will be sure to add it to our discussion of previous work. That said, we did not see how the main result can be derived in a blackbox way from Theorem 3.8.
Defining $val(H) := \|H - P\|_{tv}$, we can view hypothesis selection as the task of approximately minimizing $val(H_i)$ over $i \in [n]$ given a noisy comparator for these values. To apply their Theorem 3.8 to this task, we need to identify the distance parameter $\delta$ for which our COMPARE subroutine (our Theorem 1) is always guaranteed to produce the correct comparison between two hypotheses whose values are at least $\delta$ far apart. For every pair of hypotheses $H_i, H_j$, our COMPARE subroutine correctly reports that $val(H_i) \ge val(H_j)$ as long as $val(H_i) \ge 3 \cdot val(H_j)$, and this is tight.
Thus, the distance parameter we can achieve is lower bounded by $\delta \ge 2 \cdot \min\left(val(H_i) , val(H_j)\right)$. Taking the maximum over all pairs $i, j$, we see that the best (smallest) $\delta$ we can achieve is the second largest value of $2\cdot val(H_i) = 2\cdot \|H_i - P\|_{tv}$, taken over all $i \in [n]$. This can be arbitrarily large as a multiple of $OPT = \min_{i \in [n]} \|H_i - P\|_{tv}$, and so approximating the minimum value to within $O(\delta)$ may not bring us within $O(OPT)$.
Of course, we would be happy to be corrected on this point if the reviewer has a different argument in mind. | Summary: The paper studies the problem of hypothesis selection under a memory constraint. Here one is given $n$ distributions $H_1, \dots, H_n$ with access to an oracle that can output 1) $H_i(H_j > H_k)$ for any $i,j,k$ and 2) $1(H_i(x) > H_j(x))$ for any $i,j$ and any point $x$ in the underlying space. Given a stream of observations $x_1,x_2...$ from some unknown $P$ the task is to select an estimator $\hat{H} \in (H_i)_{i=1}^n$ that is as close as possible to $P$ in TV distance up to a multiplicative and additive constant $\alpha,\epsilon$ respectively. Moreover, one wishes to do so using at most $b$ bits of memory at any point during the algorithm.
The authors' main result, Theorem 1, shows that $s$ samples from $P$ are sufficient to output a suitable estimator with constant error probability, provided $bs = \tilde{\Omega}(n\log(n)/\epsilon^2)$; note that $bs \gtrsim n$ is necessary by previous work. They achieve this result by introducing multiple technical ideas, the main one being the 'random ladder tournament'.
Strengths: - The paper is mostly clearly written
- The paper appears technically sound
- The paper introduces simple but novel technical ideas that not only near-optimally tackle their proposed problem, but also recover other known results.
Weaknesses: - My main concern is that, due to the technical nature of the paper, the short NeurIPS format is simply not enough for a meaningful presentation of the results. The motivating problem is under a memory constraint, yet just as one would learn how to trade off memory for samples in the random ladder tournament (Lemma 3.1), the paper ends. More generally, from the main text, I don't feel like I have learnt how the algorithm actually works. In sections 1.2, 2 and 3 we are given glimpses of the key technical ideas necessary, but it doesn't feel cohesive.
Typos:
Page 4 "Results in a better..."
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would suggest shortening (or removing) the proof of Theorem 4 and replacing it with a (sketch) proof of the memory-sample trade-off.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors address limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > due to the technical nature of the paper the short format of neurips is simply not enough for a meaningful presentation of the results
We will endeavor to present at least some of the main ideas concisely in any future short versions, at NeurIPS or elsewhere. Like many NeurIPS submissions with detailed technical components, we present full descriptions of our algorithms and their analyses in the supplementary materials.
We intended Section 2.1 to explain our key ideas and the general flow of the paper. We selectively chose the random ladder tournament to be the main technical tool we present in addition to one of our basic tradeoffs presented in Lemma 3.1. We will add a more detailed discussion regarding these results in the main body of the paper (see also the response to Reviewer 5dQu).
Regardless of the outcome of the NeurIPS reviewing process, we will upload the full version of our work to arXiv. | Summary: The authors study the problem of hypothesis testing for pdfs in a streaming model. The problem is to find a hypothesis H* (corresponding to a pdf) from a family {H1,...,Hn} that is closest to an unknown pdf P. The input is a stream of points drawn i.i.d. from P. At any time step the algorithm can ask for a new point or query a Scheffé set of two distributions Hi and Hj in {H1,...,Hn}.
The main theorem in the paper shows that the algorithm can "learn" P properly by using $O\left(\frac{n\log n}{b}\cdot\frac{\log(1/\epsilon)}{\epsilon^2}\right)$ queries, where $b$ is the memory used. This is close to optimal, within a $\log n$ factor.
Note:
- Properly learning a pdf means: $\|P-H^*\|_{TV}\le\alpha\cdot OPT(\{H_1,\dots,H_n\}, P)+\epsilon$.
- The Scheffé set of two distributions $H_1, H_2$ is $\{x : H_1(x)<H_2(x)\}$.
Strengths: - Clean presentation and summary at the beginning of the paper.
- Almost tight bounds (upto log factors)
Weaknesses: - Perhaps the model can be motivated better? Why the restriction to scheffe sets?
- Worth comparing the algorithm to vast statistical literature on Neyman-Pearson tests using log-likelihood computation.
- Need to compare and cite some of the vast literature on orienteering tournaments in directed graphs - I suspect some of the underlying results may already be known.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Why not design an online Neyman-Pearson test - compute approximate log likelihoods based on the query results? The log-likelihood can then be used to upper bound the TV distance.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: NA. The paper is theoretical in nature and is fairly upfront about it.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
> Why the restriction to scheffe sets?
Scheffé set queries provide a generic computational model for expressing algorithms that apply to many families of distributions; this avoids assumptions on particular data formats or functional forms for the distributions. Scheffé queries are also sufficient for implementing existing algorithms for hypothesis selection. The fact that our algorithms only need to access the distribution family $\mathcal{H}$ using Scheffé set queries makes our positive results stronger.
We note that the nearly-matching lower bounds, based on communication-complexity arguments, apply to all algorithms -- in particular, including those not based on Scheffé sets.
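For concreteness, a minimal sketch for discrete pmfs over {0, ..., d-1} (our own illustration, not code from the submission) of the two kinds of Scheffé set access under discussion: membership of a sample point, and the probability mass a distribution assigns to the set.

```python
# Two Scheffé-set primitives for discrete pmfs, given as lists of
# probabilities indexed by the domain point x.

def in_scheffe_set(h_i, h_j, x):
    # Is x in S_ij = {x : h_i(x) > h_j(x)}?
    return h_i[x] > h_j[x]

def scheffe_mass(h_k, h_i, h_j):
    # Probability mass that h_k assigns to the Scheffé set S_ij.
    return sum(p for x, p in enumerate(h_k) if h_i[x] > h_j[x])

h1 = [0.5, 0.3, 0.2]
h2 = [0.2, 0.3, 0.5]
print(scheffe_mass(h1, h1, h2))  # 0.5: H1's mass on {x : H1(x) > H2(x)}
```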
> Need to compare and cite some of the vast literature on orienteering tournaments in directed graphs
We appreciate the suggestion. Unfortunately, we had trouble identifying which technical ideas would be relevant to our submission. We searched for papers on orienteering, but only found work on a class of NP-complete optimization problems that are variants on the Traveling Salesman problem. (A representative example is "The Directed Orienteering Problem" by Nagarajan and Ravi in *Algorithmica* 2011.) Perhaps the reviewer could point us to a starting point for the literature they had in mind?
> vast statistical literature on Neyman-Pearson tests
We will add some further discussion of this literature. However, note that maximum likelihood approaches do not generally work well for hypothesis selection in total variation ($\ell_1$) distance, even when we just want to select among two distributions. This is discussed, for example, in the textbook of Devroye and Lugosi, Section 6.4. We reproduce their counterexample here.
Suppose $H_1$ is a uniform distribution over $[-1, 1]$, and $H_2$ is a uniform distribution over $[\delta, 1+\delta]$ for some parameter $\delta \in [0,0.5)$ that we determine later. Let $P$ be the uniform distribution over $[0,1]$. Then
$$0.5 = \|H_1 - P\|_{tv} > \|H_2 - P\|_{tv} = \delta\,.$$
The correct output for the selection problem is $H_2$ once $\delta$ is sufficiently small (say, sub-constant). However, the MLE will select $H_1$ if any of the data samples fall in $[0,\delta]$. For $\delta \gg 1/s$, where $s$ is the number of samples, we will see a sample in $[0,\delta]$ with high probability. There is thus a fairly wide range of $\delta$ for which the maximum likelihood estimate will be incorrect. | Summary: This paper studies the problem of agnostic distribution learning where i.i.d. samples are generated from an unknown distribution $X$. The goal is to find the best distribution from a given set of finite distributions $\\{H_1,\cdots,H_n\\}$ that is closest to $X$ under total variation distance. The authors specifically study the sample complexity, under which the memory that is needed to store the information of the samples is bounded. The main contribution is a trade-off between the number of bits of the memory and samples needed, which is tight up to a $\log n$ factor in the minimax sense.
The main proof technique is based on the so-called "random ladder tournament". This differs from the conventional approach of first estimating the probability of the Scheffé sets of all pairs in $\\{H_1,\cdots,H_n\\}$ and then selecting a distribution closest to the estimates. In contrast, the authors propose an ingenious streaming algorithm that tracks the "best" distribution in an online fashion, which is suited to achieving their desired trade-off between memory usage and sample complexity.
Strengths: The paper's primary strength lies in its novel "random ladder tournament" algorithm. This algorithm effectively achieves a memory-efficient sample complexity for agnostic distribution learning. The approach incorporates some interesting technical ideas that may be of independent interest. As far as I'm aware, this particular scenario has not been studied in the literature.
Weaknesses: My main concern regarding this paper is its overall significance to the machine learning community. The paper has a strong TCS flavor, and the introduced model appears somewhat contrived. Specifically, the derived trade-offs, though mathematically intriguing, do not offer any surprising insights. Thus, its practical implications and utility within a broader machine learning context may be limited.
Additionally, the paper's presentation could certainly benefit from some improvements. The authors dedicate a significant portion of the introduction to outlining their techniques, which, unfortunately, are quite challenging to grasp without delving into the substantive technical details found mainly in the appendix.
I would recommend eliminating most of this preliminary discussion and focusing primarily on the "random ladder tournament" argument, as outlined in Section 3. Providing a more detailed and accessible explanation of this central argument could make the paper more digestible for readers. Following this, a more straightforward discussion of the techniques used to improve the logarithmic terms would be better placed and easier to understand. This reordering should contribute to a more coherent and engaging narrative throughout the paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See comments above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No issue with negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > overall significance to the machine learning community
Hypothesis selection is a fundamental problem in statistical learning theory. The formulation we study here abstracts and generalizes many specific distribution selection/estimation tasks (some of which are discussed in the book of Devroye and Lugosi, "Combinatorial Methods in Density Estimation").
Meanwhile, when one takes a computational perspective, memory and communication are important resources to control in practical implementations, and their study gives rise to a variety of theoretical questions.
Our goal in formulating this problem was to study memory/sample tradeoffs in a general setting, and we hope our results serve as a launching point for studying these resources systematically.
Note that general hypothesis selection algorithms (i.e., ones that do not use information specific to the distributions under consideration) play an important role in several model-specific learning problems. This includes the Gaussian mixture-learning algorithms of Daskalakis and Kamath. Papers on this topic regularly appear at NeurIPS. Examples include:
[Private hypothesis selection](https://papers.nips.cc/paper_files/paper/2019/hash/9778d5d219c5080b9a6a17bef029331c-Abstract.html). M Bun, G Kamath, T Steinke, S Z Wu. *NeurIPS 2019*
[Nearly tight sample complexity bounds for learning mixtures of gaussians via sample compression schemes](https://proceedings.neurips.cc/paper_files/paper/2018/hash/70ece1e1e0931919438fcfc6bd5f199c-Abstract.html). Ashtiani, S. Ben-David, N. Harvey, C. Liaw, A. Mehrabian, and Y. Plan. *NeurIPS 2018*
[Near-Optimal-Sample Estimators for Spherical Gaussian Mixtures](https://proceedings.neurips.cc/paper/2014/hash/c0f168ce8900fa56e57789e2a2f2c9d0-Abstract.html). A T Suresh, A Orlitsky, J Acharya, A Jafarpour. *NIPS 2014*
[Near-optimal density estimation in near-linear time using variable-width histograms](https://papers.nips.cc/paper_files/paper/2014/hash/287e03db1d99e0ec2edb90d079e142f3-Abstract.html). S Chan, I Diakonikolas, R Servedio, X Sun. *NIPS 2014*
> I would recommend [...] focusing primarily on the "random ladder tournament"
That's a good suggestion. It agrees with some of the comments from other reviewers. We will try to implement it in future short versions of the paper (whatever the outcome at NeurIPS).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my concerns. After reviewing the rebuttals and feedback from other reviewers, I am convinced that the model studied holds inherent theoretical value and offers potential practical utility. I trust the authors to address the presentation issues to the best of their ability. I have also decided to adjust my rating upward. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and thoughtful comments. We will incorporate all the editorial comments.
Please find the individual responses to the reviewers below. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On the Interplay between Social Welfare and Tractability of Equilibria | Accept (poster) | Summary: This paper considers a specific type of smooth game first introduced in [Roughgarden 2015]. The main result of this paper is that when the game has robust PoA lower bounded by $1-\epsilon$ (defined via the smoothness of the game) and all the players apply the optimistic gradient descent algorithm (OGD), the best iterate over the learning process achieves a $(\sqrt{\epsilon}/\sqrt{\delta n}, \delta)$-weak Nash equilibrium, meaning that all but a $\delta$ fraction of the players have a low NE gap. The authors also show that under certain conditions on the game, CGD, a variant of OGD, provides better social welfare guarantees.
Strengths: - The paper considers a certain type of game with a good rPoA guarantee and shows that the classic OGD algorithm achieves good results with provable guarantees.
- The general idea of the proof is intuitive and clear to me. The analysis also looks correct to me in general.
Weaknesses: - One issue is the motivation of the study and the novelty of the proposed analysis. Specifically, it is less clear why the class of games with $rPoA\geq 1-\epsilon$ is an interesting class. It seems to me that this is more of a technical assumption that makes the proof work. More concretely, with respect to the first result showing convergence to Nash, the analysis follows the recent line of work on last-iterate convergence and constant individual regret bounds in general-sum games using OGD/OFTRL, which shows that a lower-bounded sum of regrets leads to stability of the dynamics, which further leads to low NE-gap guarantees. The rPoA condition basically guarantees that the sum of regrets is lower bounded, according to the smoothness condition of the game.
- Another related issue is the tractability of rPoA. As mentioned by the authors, rPoA can be calculated in polynomial time only if the game is explicitly represented, but the explicit representation already takes exponential space with respect to the number of players and may not be available when each player's utility function is defined by certain concrete functions. Therefore, deciding whether rPoA is lower bounded or not is also computationally intractable when the number of players is large.
- The description of Observation 3.3 does not look very clear. Specifically, the equivalence should define a certain choice of $(\lambda, \mu)$ in the definition of a smooth game, and I did not find a proof of this observation.
- The notations used in this paper are not very clear.
- In Section 3, many notations carry the superscript $n$; it would be better to replace these with $*^{(n)}$ to distinguish them from exponents. Specifically, in line 194, $rPoA\geq 1-\epsilon^n$ is very misleading.
- In line 302, should $rPoA_G^{\epsilon}$ be replaced by $PoA_G^{\epsilon}$, since $rPoA$ is not defined with respect to NE.
- In line 324, the guarantee should be $rPoA_G\cdot OPT$. Specifically, in the line below line 926, the RHS should also include $OPT$.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Can the authors explain more about the class of games with $rPoA\geq 1-\epsilon$? Is there a certain class of previously studied games or real applications that satisfies this condition?
- The obtained results for a good NE-gap strategy are with respect to the best iterate. I wonder whether the results may also extend to the average-iterate case, which could lead to lower computational complexity for deriving the output strategy?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: See weakness and questions for details.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their feedback. Below we address the concerns.
*“Specifically, it is less clear why the class of game with $rPoA≥1−\epsilon$ is an interesting class of games. [...] Is there a certain class of previously studied games or real applications that satisfy this condition?”*
We believe that the main application of this condition is on large games. Indeed, two rather recent papers [37,22] show that $rPoA \to 1$ as $n \to \infty$ under very broad conditions, encompassing Walrasian auctions and Fisher markets, as well as general combinatorial auctions under probabilistic demand. Those are some of the most well-studied settings in algorithmic game theory, so the class of games in which $rPoA \geq 1 - \epsilon$ certainly contains previously studied games in the literature. Another application of this condition we provide in our paper is on games with bounded influence (see Corollary 3.4), which is another class of games with rich history and real applications.
Furthermore, beyond large games and games with bounded influence, another class that satisfies this property are zero-sum games for which the Minty property holds (Observation 3.3), which is a classical condition in optimization. We believe that this connection is interesting as it connects two separate lines of work, and it allows us to use insights from the line of work on smoothness to better understand the Minty property, and vice versa; several such implications are highlighted in Lines 70-85.
*“Another related issue is the tractability of rPoA. As mentioned by the authors, while rPoA can be calculated within polynomial time if the game is explicitly represented, meaning that the game representation already takes exponential space with respect to the number of players, which may not be the case when the utility function for each player is defined by certain concrete functions. Therefore, deciding whether rPoA is lower bounded or not is also computationally intractable when the number of players is large.”*
First, as we explained in our response above, there are many interesting classes of games where we know that $rPoA \to 1$, without requiring one to exactly compute rPoA.
Second, and more importantly, the tractability of $rPoA$ is by no means necessary to our approach. Indeed, notice that executing the algorithm—which is of course computationally efficient even in multi-player games with a succinct representation—gives itself a sufficient and computationally efficient condition, without having to compute $rPoA$. Specifically, by running the algorithm and computing the (minimum) best response gap (which can be computed easily), there are two possibilities:
i) if the best response gap is large, then our theory implies that $rPoA$ must be far from 1, in which case our theory is not applicable;
ii) if the best response gap is small, we are close to Nash equilibrium, which is precisely what we were looking for in the first place.
So computing $rPoA$ is not needed at all for our approach.
*“The description of Observation 3.3 does not look very clear. Specifically, the equivalence should define a certain choice of $(\lambda,\mu)$ in the definition of smooth game “*
This observation follows directly from the definitions, which is why a proof has not been included. Notice that a game being smooth by definition means that there exists a legitimate pair of finite parameters $(\lambda, \mu)$ that satisfies the smoothness property, which is why defining such a pair in Observation 3.3 is not needed. Indeed, the existence of any such pair suffices for the claimed equivalence.
*“In section 3, many notations are with superscript $n$. which would be better if they are replaced by...”*
We thank the reviewer for the suggestion. We will follow the reviewer’s recommendation in the revised version, as we see how our current notation can cause confusion.
*“In line 302, should $rPoA_\mathcal{G}^\epsilon$ be replaced with $PoA_\mathcal{G}^\epsilon$ ?”*
Indeed, this is a typo, thank you for pointing it out.
*“In line 324, the guarantee should be [...]”*
We stated Theorem 4.2 under the (normalizing) assumption that $OPT_{\mathcal{G}} = 1$ (as noted in Line 915), but we will follow the reviewer’s suggestion as we see how our current choice causes confusion.
*“The obtained results for good NE gap strategy is with respect to the best-iterate. I wonder whether the results may also extend to the average-iterate case, which may lead to less computational complexity to derive the output strategy?”*
Indeed, our main result can be extended to apply for an average iterate, not just the best iterate. Specifically, by selecting an iterate at random, there is a high probability that it will have a small equilibrium gap. We will make sure to point this out in the revised version.
---
Rebuttal Comment 1.1:
Comment: We thank again the reviewer for their time and the helpful feedback. Given that the discussion period soon comes to an end, we wanted to see if our response adequately addressed the concerns, and if the reviewer has any further questions for us. | Summary: This paper studies the convergence properties of no-regret dynamics in $(\lambda,\mu)$-smooth games. More precisely, the authors show that for any $(\lambda,\mu)$-smooth game at which the bound of $\frac{\lambda}{1+\mu}$ converges to $1$ as the number of agents grows, the resulting Online Gradient Descent dynamics converges in polynomial time to an $(\epsilon,\delta)$-weak Nash Equilibrium. In an $(\epsilon,\delta)$-weak NE, a $(1-\delta)$-fraction of the agents cannot increase their payoff by more than $\epsilon$ by deviating to another mixed strategy. The latter implies that if $\frac{\lambda}{1+\mu} = 1$ then the resulting OGD dynamics converges to an $\epsilon$-MNE in $O(1/\epsilon^2)$ steps. This reveals an interesting phase transition in the computational complexity of equilibrium computation between general multi-player games with PoA = 1 and the smaller class of $(\lambda,\mu)$-smooth games with $\frac{\lambda}{1+\mu} = 1$ (which naturally implies PoA = 1).
Motivated by the work of Roughgarden et al. establishing that no-regret dynamics in $(\lambda,\mu)$-smooth games guarantees a $\frac{\lambda}{1+\mu}$ fraction of the optimal payoff, the authors show that the latter result can be improved for a specific class of $(\lambda,\mu)$-smooth games satisfying an additional condition (Condition 4). More precisely, the authors show that if all agents adopt Clairvoyant Gradient Descent then the time-average joint probability distribution converges to a CCE, while there exists an iteration at which the respective mixed strategy profile attains strictly more than a $\frac{\lambda}{1+\mu}$ fraction of the optimal payoff.
Strengths: I believe that the convergence result to an $(\epsilon,\delta)$-weak NE in $(\lambda,\mu)$-smooth games is a solid contribution. I also find surprising the phase transition between the computational complexity of equilibrium computation in games with PoA = 1 (which are PPAD-hard) and the respective complexity of $(\lambda,\mu)$-smooth games with $\frac{\lambda}{1+\mu} = 1$. I also find the second presented result, improving upon the $\frac{\lambda}{1+\mu}$ fraction of the optimal social welfare, interesting. I overall believe that this is a good paper aligned with the interests of the online learning/game theory audience of NeurIPS.
Weaknesses: In my opinion the only weakness of the paper concerns the comparison of the second result with the respective previous result of Roughgarden et al. As far as I understand, Roughgarden et al. show that the time-average payoff attained by any no-regret dynamics (in a $(\lambda,\mu)$-smooth game) is at least a $\frac{\lambda}{1+\mu}$ fraction of the optimal social welfare. However, Theorem 4.2 guarantees social welfare strictly more than a $\frac{\lambda}{1+\mu}$ fraction of the optimal one only at a specific iteration.
That being said, I still consider the result interesting. However, in case I am not missing something, additional discussion is needed so as to fairly compare the two results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Your second result refers to the *best iterate* while the result of Roughgarden et al. refers to the time-average payoff. Could you elaborate more on how the two results compare?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their feedback.
*“Theorem 4.2 guarantees social welfare strictly more than $\frac{\lambda}{1 + \mu}$ fraction the optimal one only at a specific iteration.”*
While we stated Item 2 of Theorem 4.2 for a single iteration, it is direct to extend our proof so that the improvement in fact holds for almost all iterates (say 99% of them), not just a single one. More precisely, we can state Item 2 of Theorem 4.2 so that a $1- \delta$ fraction of the iterations has an average social welfare strictly better than rPoA by an additive factor of $\delta \cdot \epsilon_0^2 C(\mathcal{G})$. We will highlight this stronger result in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have read the other reviews and I am confident for my positive evaluation of the paper. | Summary: This paper discusses the convergence and welfare of learning algorithms in smooth games. The authors show that when approximate full efficiency can be guaranteed via a smoothness argument, Nash equilibria are approachable under a family of no-regret learning algorithms, thereby guaranteeing fast and decentralized computation. They also leverage this connection to obtain new convergence results in large games, as well as extensions to Bayesian mechanisms. The paper unifies recent works in two-player zero-sum games, illuminating an equivalence between smoothness and a well-studied condition in the optimization literature known as the Minty property. Finally, the authors establish that a family of no-regret learning dynamics outperforms the welfare predicted by the smoothness framework under a generic condition, while at the same time guaranteeing convergence to the set of coarse correlated equilibria.
Strengths: The strength of this paper lies in its contribution to the understanding of the convergence and welfare of learning algorithms in smooth games. The authors provide new insights into the relationship between efficiency and the behavior of a family of no-regret learning algorithms, and they also unify recent works in two-player zero-sum games.
Weaknesses: The paper does have quite a few notations, but it's understandable. I've got a few questions, which might also point out some areas in the paper that could be improved. I've put these questions and potential weaknesses together in the question section for easy reference. But, I think this is an overall well-written paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is there any counterexample showing that if rPoA does not converge to 1, then the decentralized algorithm (OGD) provided in this paper does not converge to a Nash equilibrium? Or, if Condition 4.1 does not hold, does Thm 4.2.1 still hold?
2. What is the indication of Condition 4.1? Are there some examples that condition 4.1 holds?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitation and potential negative societal impact on their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their feedback.
*“Is there any counterexample that if $rPoA$ does not converge to 1, then the decentralized algorithm (OGD) provided in this paper does not converge to Nash Equilibrium?”*
Yes. As we point out in Lines 272-274, there is a bimatrix game (see Proposition A.10 in Appendix A.3) in which $rPoA = 0.125$ and OGD does not converge to a Nash equilibrium.
*“Is there any counterexample…If Condition 4.1 does not hold, then still thm 4.2.1 hold”*
We do not know of an example where Condition 4.1 does not hold but the conclusion of Thm 4.2 applies, but we expect that such examples exist.
*“Are there some examples that condition 4.1 holds?”*
Condition 4.1 is quite generic, so we expect it to hold in general. More specifically, as we explain in the paper, there are two sufficient assumptions to satisfy Condition 4.1. The first one is that $PoA > rPoA$; given that $PoA$ gives the worst-case quality over a smaller set compared to $rPoA$, we expect this condition to hold. More concretely, Figure 1 in Appendix A.5 shows that such is the case in random normal-form games, while in the revised version we will also highlight that the condition $PoA > rPoA$ is satisfied for all benchmark extensive-form games studied in the literature. The second assumption is a mild continuity condition regarding the behavior of $PoA$ under approximate Nash equilibria; as noted by Roughgarden [81], this is standard in this line of work for otherwise the entire analysis of $PoA$ becomes problematic (in that the conclusions of the theory are not robust to arbitrarily small perturbations). So Condition 4.1 is very generic. We finally point out that Theorem A.18 provides a more general result that does not rely on Condition 4.1.
---
Rebuttal 2:
Title: Thank you
Comment: Thank you for your response. | Summary: This work studies the connections between the efficiency of Nash equilibria, as measured for example by social welfare, and tractability of computing these equilibria through efficient no regret learning algorithms. The authors provide the key insight that the smoothness framework introduced by Roughgarden, that is typically used to analyze price of anarchy, can be also leveraged to analyze the efficient convergence of optimistic gradient descent (OGD) to Nash equilibria. They use this insight to prove that if as the number of players increases, the robust price of anarchy goes to 1 without increasing the games Lipschitz constant too quickly, then OGD converges to weak Nash equilibria. This finding unifies previous findings on zero sum games and is connected to existing work on the Minty property. Even beyond OGD, the authors study the connections between convergence to CCE and social welfare for clairvoyant GD (CGD). They show for the first time that convergence to CCE and outperforming restricted price of anarchy can be combined through an efficient algorithm.
Strengths: Overall I believe that the paper makes a very strong and original contribution. To the best of my knowledge, connections between efficiency of equilibria and tractability have not received a lot of attention. This is evidenced by the fact that relatively simple observations like 3.3 are commonly overlooked. In addition to the novelty of this works topic, the technical results are very extensive and non trivial.
Weaknesses: The paragraph from 179 to 190 could benefit from expansion. In the common setting where we analyze a fixed game, one can always normalize the utilities to get a Lipschitz constant of 1. I have a sense that such a normalization would affect the value of $\epsilon$ in the general case thus we cannot always get $\gamma$ to be small for free. I am not sure though. Adding more intuition would help.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above for a question/suggestion.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: No limitations applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their feedback.
*“The paragraph from 179 to 190 could benefit from expansion. In the common setting where we analyze a fixed game, one can always normalize the utilities to get a Lipschitz constant of 1”*
It is indeed the case that we can appropriately normalize the utilities so that the Lipschitz constant is 1, but here we operate in the usual setting where a bound on the utilities is imposed on the $\ell_\infty$ norm; this is why the Lipschitz constant with respect to the $\ell_2$ norm can be as large as $\Theta(\sqrt{n})$. We will clarify this in the revised version.
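As a toy illustration of this point (our own, under the assumed utility $u(x) = \sum_i x_i$, which is not from the paper): a gradient whose every coordinate is bounded by 1 in $\ell_\infty$ still has an $\ell_2$ norm of order $\sqrt{n}$.

```python
import math

n = 100
# Utility u(x) = sum(x): each partial derivative equals 1, so the
# l_inf norm of the gradient is 1, but its l_2 norm grows like sqrt(n).
grad = [1.0] * n
linf = max(abs(g) for g in grad)
l2 = math.sqrt(sum(g * g for g in grad))
```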
---
Rebuttal Comment 1.1:
Title: Thanks for the clarification
Comment: Thank you for the clarification! | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work studies the connection between efficiency and computation tractability of equilibrium in smooth games. Its major finding is that optimistic mirror descent reaches a weak Nash equilibrium for large games satisfying the property that asymptotically smoothness can guarantee efficiency of equilibria. This result establishes the connection between smoothness arguments and the approachability of no-regret learning algorithms to Nash equilibria.
Strengths: This work draws new insights from an interesting angle connecting two fundamental subjects in algorithmic game theory. Robust price of anarchy characterizes the fraction of optimal social welfare that the smoothness argument can guarantee for every Nash equilibrium. It turns out from this paper that robust PoA $\to 1$ also ensures the convergence of no-regret dynamics, which is non-trivial. The results also recover existing theories on the convergence of OMD in two-player zero-sum games.
Weaknesses: The rates (e.g. $n^{-\alpha/3}$) are dependent on the number of players instead of the number of iterations, which is different from what we commonly see/expect. I am not sure how useful the rates are, for example, in large games, still a $n^{2/3}$ number of players may yield bad responses.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What are some natural examples where the conditions in Thm 3.1 are satisfied and non-trivial rates are obtained?
Is there direct generalization to mean-field games?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their feedback.
*“What are some natural examples where the conditions in Thm 3.1 are satisfied and non-trivial rates are obtained?”*
The simplest example is when $rPoA = 1$, in which case Thm 3.1 (in particular see the more general version of Theorem A.2) yields a $1/\sqrt{T}$-Nash equilibrium after $O(T)$ repetitions. As we point out in Observation 3.3, this already captures the well-studied Minty property, which is satisfied, for example, in two-player zero-sum games.
Furthermore, in the regime where $rPoA \to 1$, Corollary 3.4 shows that the conditions of Thm 3.1 are satisfied in the natural class of games with bounded influence, which includes voting games; see reference [51] for further applications.
*“In large games, still a $n^{2/3}$ number of players may yield bad responses.”*
This is true for a certain regime of $\gamma^n$, but as we show in Corollary A.3, our approach can also establish convergence to a Nash equilibrium--a point where *all* players are best responding.
*“Is there direct generalization to mean-field games?”*
This is an interesting question. To apply our results in the mean-field regime the key difficulty seems to be showing that the robust price of anarchy ($rPoA$) approaches $1$. There is some work in the literature that identifies natural classes of mean-field games for which $PoA \to 1$ (e.g., see Carmona et al. (Price of Anarchy for Mean Field Games)), so it is plausible that the same applies to $rPoA$ as well, but formalizing this would require further work.
---
Rebuttal Comment 1.1:
Comment: Thank you. After reading the rebuttal and other reviews I decide to keep my positive evaluation. | null | null | null | null | null | null |
PDP: Parameter-free Differentiable Pruning is All You Need | Accept (poster) | Summary: This paper proposes a parameter-free differentiable pruning, which uses a dynamic function of weights during training to generate soft pruning masks. It can generalize well on random/structured/channel pruning on both vision and NLP tasks. It achieves superior pruning results.
Strengths: 1. The proposed PDP approach is novel and interesting, and can generalize well on random/structured/channel pruning.
2. The analysis in Section 3.1 is interesting and provides valuable insights.
3. The proposed PDP approach can be faster than existing differentiable pruning approaches, with smaller pruning accuracy loss.
Weaknesses: 1. Instead of learning masks, which would introduce extra trainable parameters, PDP lets the weights of the network adjust themselves and generate soft masks. This approach may seem counterintuitive, as most weights are obtained and converged through expensive training processes. It may be more effective to adjust and learn masks rather than adjusting network weights to generate masks.
2. The paper only provides a simple explanation for why PDP is more effective. A theoretical proof may be more convincing.
3. Although the method is effective on the CNN and BERT models evaluated in this paper, its effectiveness on other models, such as larger vision transformers and LLMs, is still unclear.
4. An important related work[1] is missing.
[1]Xia, Mengzhou, Zexuan Zhong, and Danqi Chen. "Structured pruning learns compact and accurate models." ACL (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The main limitation of the paper is that, aside from empirical experimental results, there is no theoretical proof or explanation as to why it is superior to existing non-parameter-free approaches. The solution presented in the paper raises doubts about its effectiveness on larger neural networks, such as ViT and LLMs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q0: This approach may seem counterintuitive, as most weights are obtained and converged through expensive training processes. It may be more effective to adjust and learn masks rather than adjusting network weights to generate masks.**
Thank you for the opportunity to reiterate our key contribution. Our paper in a nutshell demonstrates that adjusting weights to generate masks is possible and cheaper than the conventional approach (i.e., learning/adjusting masks based on the weight values), which is our key contribution. Although it may sound counterintuitive, our comprehensive experimental results support our contributions.
**Q1: The paper only provides a simple explanation for why PDP is more effective. A theoretical proof may be more convincing.**
Thank you for your feedback. Here, we present more explanation on PDP.
For a given gradient $\Delta \hat w$ on the masked weight $\hat w$, the gradient on $w$ with magnitude-based pruning can be computed as follows:
$ \Delta w_{mag} = \Delta \hat w \cdot m(w) $
In PDP case, the gradient on $w$ is
$ \Delta w_{pdp} = \Delta w_{mag} + 2 \Delta \hat w \cdot w^2 m(w)(1-m(w)) $
From the above, we can observe the following:
* $|\Delta w_{pdp}| \ge |\Delta w_{mag}|$, with the same sign: this implies PDP allows the weight to learn more from each iteration (a larger gradient in the loss-decreasing direction), in proportion to its proximity to $t$.
* The extra term in $\Delta w_{pdp}$ is largest when $m(w) \approx 0.5$: this means PDP encourages the weights near the pruning boundary (around $t$) to learn more aggressively, so that they can converge to a better spot faster.
We conjecture these features enable PDP to make a better pruning decision for the weights on the boundary, and provide strong "second" chances to recover from undesirable weight updates.
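These two observations can be sanity-checked numerically. The sketch below is our own illustration, not the paper's code; the helper names and the choice $\tau = 0.1$ are our assumptions, and the rebuttal's $2 w^2 m(w)(1-m(w))$ term corresponds to $\tau = 1$ (we keep $\tau$ explicit).

```python
import math

def soft_mask(w, t, tau):
    # Soft mask m(w): softmax over (w^2/tau, t^2/tau), written as a logistic.
    return 1.0 / (1.0 + math.exp((t ** 2 - w ** 2) / tau))

def grad_factor_mag(w, t, tau):
    # Magnitude-style pruning passes the gradient through m(w) only.
    return soft_mask(w, t, tau)

def grad_factor_pdp(w, t, tau):
    # PDP adds the mask's own sensitivity: m + (2 w^2 / tau) * m * (1 - m).
    m = soft_mask(w, t, tau)
    return m + (2.0 * w ** 2 / tau) * m * (1.0 - m)

t, tau = 0.5, 0.1
ws = [0.1 * k for k in range(1, 16)]  # weights from 0.1 to 1.5
extras = [grad_factor_pdp(w, t, tau) - grad_factor_mag(w, t, tau) for w in ws]
```

On this grid the PDP multiplier dominates the magnitude one everywhere, and the extra term peaks close to (though, because of the $w^2$ factor, not exactly at) the boundary $w = t$ and vanishes far from it.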
**Q2: Although the method is effective on the CNN and BERT models evaluated in this paper, its effectiveness on other models, such as larger vision transformers and LLMs, is still unclear.**
Thank you for the comment. Due to time and resource constraints, we could not experiment with a larger LLM. Yet, the GPT2 results in Table 3 indicate that PDP can offer a better trade-off between LLM training cost and LLM accuracy. We hope this addresses some of your concerns.
**Q3: An important related work[1] is missing.**
Thanks for the new reference. We will cite the paper in the updated draft.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for your further explanation, which answers my questions. I will increase the score. | Summary: This paper deals with the pruning algorithm PDP on DNN, the main innovation is to generate a differentiable mask based on weights using a designed threshold t, and the softmax function. This mask can ensure the gradient propagation during forward and backward process while training, reducing the accuracy loss.
Strengths: 1. This paper is well written and easy to follow.
2. The proposed method is novel and provides an efficient way to learn pruning masks without incurring heavy parameter burden.
3. The performance increase remains constant on various sparse ratios and models.
Weaknesses: 1. The employment of different masks can impact training process performance, yet a higher SAD[1] may result in a significant decline in accuracy. Given that the masks are still changeable in the later stages of PDP training, how can it provide high accuracy?
[1] Learning N: M fine-grained structured sparse neural networks from scratch. In ICLR, 2020.
2. The description of PDP algorithm in section 3.2 may not be completely unambiguous. For the calculation of the conditional point t, the meaning of the paper should be to take the median of $r*n(W)$ and $(1-r)*n(W)$, but figure 3 (a) shows that $t=0.5\{min(W_h)+max(W_l)\}$, which seems to be wrong.
3. In Table 6, the performance of LNM seems to be quite different from that reported in the original paper. Please explain this.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No limitations discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q0: The employment of different masks can impact training process performance, yet a higher SAD[1] may result in a significant decline in accuracy. Given that the masks are still changeable in the later stages of PDP training, how can it provide high accuracy?**
Thank you for the question. Sparse Architecture Divergence (SAD) from [1] measures the changes in the pruning mask during training. The insight behind SAD is that a lower SAD incurs less disruption to the training process, potentially improving the model quality. We would first like to point out that our PDP pruning mask is soft and continuous during training, and only gets binarized for validation/test/inference. Accordingly, PDP can in fact make the SAD between iterations smaller than in the hard/binarized pruning case.
As an illustration, for the hard masks, the following scenario leads to total SAD 2 and max SAD 1.
| | iter0 | iter1 | iter2 | iter3 |
| --- | --- | --- | --- | --- |
| Mask | 0 | 1 | 1 | 0 |
| SAD | - | 1 | 0 | 1 |
For PDP case with $t$=0.5, the same scenario could be the following, where total SAD is 0.4 and max SAD 0.2.
| | iter0 | iter1 | iter2 | iter3 |
| --- | --- | --- | --- | --- |
| training-mask | 0.4 | 0.6 | 0.6 | 0.4 |
| SAD | - | 0.2 | 0 | 0.2 |
which implies PDP slowly tries the same pruning scenario as the above but very gradually using the soft mask.
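The SAD bookkeeping in the two tables above can be replayed in a few lines (our own illustration of the arithmetic, not code from the paper or from [1]):

```python
def sad(masks):
    # Sparse Architecture Divergence between consecutive iterations:
    # total and maximum per-step change of a single weight's mask value.
    diffs = [abs(b - a) for a, b in zip(masks, masks[1:])]
    return sum(diffs), max(diffs)

hard_total, hard_max = sad([0, 1, 1, 0])          # hard/binarized masks
soft_total, soft_max = sad([0.4, 0.6, 0.6, 0.4])  # PDP soft masks, t = 0.5
```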
Hence, PDP offers much better model stability during training in terms of SAD, thanks to our proposed differentiable soft mask, and lets each weight converge to the right pruning choice at the end. When the network is finalized with a binary mask at the end of training, PDP can yield a higher-quality model, as shown in Table 6 (i.e., the comparison against LNM [1], which proposed SAD).
**Q1: The description of PDP algorithm in section 3.2 may not be completely unambiguous. For the calculation of the conditional point $t$, the meaning of the paper should be to take the median of $r \cdot n(W)$ and $(1-r) \cdot n(W)$, but figure 3 (a) shows that $t = 0.5\{\min(W_h)+\max(W_l)\}$, which seems to be wrong.**
We appreciate the opportunity to clarify the definition of the conditional point $t$. $t$ is essentially the weight value that would have a mask value of exactly 0.5. Assuming every weight value in a layer is unique, $W_l \cup W_h$ includes all the weights in the layer. Therefore, in this case, $t$ is the median of $\max(W_l)$ and $\min(W_h)$, which is simply the mean of the two numbers, as in Fig. 3 (b). Note that $r \cdot n(W)$ and $(1-r) \cdot n(W)$ are used to find $W_l$ and $W_h$ via TopK, so they are practically the same as $|W_l|$ and $|W_h|$ (i.e., the numbers of weights).
The example on the right side in Fig. 4 (a) also illustrates the process of finding $t=0.08$ by taking the median (or mid-point) of 0.09 and 0.07.
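A minimal sketch of this computation (ours, not the released implementation; `conditional_point` is a hypothetical helper), on a toy layer chosen so that $\max(W_l) = 0.07$ and $\min(W_h) = 0.09$, reproducing $t = 0.08$:

```python
def conditional_point(weights, r):
    # Sort weight magnitudes; the lowest r-fraction is W_l, the rest W_h;
    # t is the mid-point of max(W_l) and min(W_h).
    a = sorted(abs(w) for w in weights)
    k = round(r * len(a))          # |W_l|, obtained via TopK in practice
    return 0.5 * (a[k - 1] + a[k])

# Toy layer, pruning ratio r = 0.5: W_l = {0.01, 0.02, 0.07},
# W_h = {0.09, 0.3, 0.5}, so t = (0.07 + 0.09) / 2 = 0.08.
t = conditional_point([0.01, -0.02, 0.07, 0.09, -0.3, 0.5], 0.5)
```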
**Q2: In Table 6, the performance of LNM seems to be quite different from that reported in the original paper. Please explain this.**
Thank you for the opportunity to clarify the LNM results. We believe it could be due to two reasons:
* First, the dense baseline models used as starting points differ. For example, for ResNet50, LNM used a dense model with 77.3% top-1 accuracy, but this starting checkpoint is not publicly available. Hence, we used the public dense model (in torchvision) with 76.1% top-1 accuracy. This difference in dense model accuracy contributes to the differences from the accuracies reported by LNM.
* Second, LNM uses a custom color augmentation according to its public repo. Hence, we removed this feature to keep the data augmentation consistent across methods.
Strengths: * The paper proposes a soft mask during training, which smooths the participation of weights around the pruning threshold in the neural network. This can give more chances for weights around pruning threshold to change their states, especially help pruned weights actively recover themselves.
* The compared methods are enough in experiments.
* The paper discusses relevant works and provides a comprehensive analysis.
Weaknesses: * Method:
* Lack of differentiable analysis: The title and content of the paper stress the differentiable pruning, however, method part (3.2) does not give differentiable analysis.
* Insufficient explanation of the design of "t" and calculation of "m(w)": Why is “t” in PDP training flow designed in this way? The paper lacks clear explanations about the design of “t” and calculation of “m(w)”.
* Lack of analysis and ablation study on the efficacy of the proposed soft mask: I find that the paper adopts progressive pruning as illustrated in the algorithm chart in supplementary material. As progressive pruning is an effective technique for final accuracy, I am doubtful about the real effect of the proposed method. To better understand the effectiveness of the proposed soft mask, we need more analysis and ablation studies.
* Experiments:
* More experiments about Transformer architectures are preferred. The paper provides BERT models experiments on MNLI dataset. Experiments with ViT models or BERT models on the whole GLUE benchmark will be preferred.
* In the main body, there are not enough sparsity ratio selections. Table 4 provides only one sparsity ratio for each model. More sparsity ratios, ranging from low to high are necessary to understand the effect of the proposed method.
* Writing:
* Redundancy between Figure 5 and Table 4: It seems that Figure 5 and Table 4 convey the same information. What is the difference between Figure 5 and Table 4 or why illustrate the same information twice? I think each table or figure should contain specific information, especially in the main body.
* Improper organization of the paper: The paper first illustrates the benefits of PDP over existing pruning approaches, then introduces the PDP algorithm, which can be confusing. Before readers know the method clearly, it would be difficult to understand its benefits. The better solution would be to change their order or to reconsider the effects of section 3.1, such as serving it as a motivation section.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Please see the method and experiment points in the weakness part. If my concerns are addressed, I would like to increase my score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: There is no potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q0: Insufficient explanation of the design of "t" and calculation of "m(w)": Why is “t” in PDP training flow designed in this way? The paper lacks clear explanations about the design of “t” and calculation of “m(w)”.**
Thank you for the chance to clarify the important notation in our work.
$t$ is the weight value of a layer that has exactly a 0.5 chance of being pruned and a 0.5 chance of not being pruned. We obtain $t$ from the weight distribution of the layer, as in Fig. 3 (a). Consequently, $t$ serves as the threshold that decides whether a weight receives a soft mask $m(w)$ larger than 0.5.
The way we compute $m(w)$ is differentiable using softmax, as shown in Fig. 3 (a). More specifically, we compute $m(w)$ as follows:
$m(w) = \frac{e^{\frac{w^2}{\tau}}}{e^{\frac{w^2}{\tau}}+e^{\frac{t^2}{\tau}}} $
Thus, when $w$ happens to equal $t$, $m(w)$ is 0.5. If $w$ is larger in magnitude than $t$, then $m(w)$ is also larger than 0.5, as in Fig. 3 (b).
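The mask above can be sketched in a few lines. This is an illustrative re-implementation, not the authors' code; the numerically stable sigmoid form used below is algebraically equivalent to the two-way softmax above.

```python
import math

def soft_mask(w, t, tau=1.0):
    """PDP-style soft mask m(w): a softmax over w^2 and the threshold t^2,
    rewritten in the equivalent sigmoid form for numerical stability:
        m(w) = 1 / (1 + exp((t^2 - w^2) / tau))
    """
    return 1.0 / (1.0 + math.exp((t * t - w * w) / tau))
```

As a sanity check, `soft_mask(t, t)` returns exactly 0.5, and weights whose magnitude is above (below) `t` get masks above (below) 0.5, consistent with Fig. 3 (b).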
**Q1: The title and content of the paper stress the differentiable pruning, however, method part (3.2) does not give differentiable analysis.**
Thank you for the opportunity to explain more details. Without loss of generality, let's assume $\tau = 1$ (the temperature of softmax). Then, the masked weight $\hat w$ in PDP is
$\hat w = m(w) \cdot w = \frac{e^{w^2}}{e^{w^2}+e^{t^2}} \cdot w $
Then, for a given gradient on $\hat w, \frac{\partial L}{\partial \hat w}$,
$\frac{\partial L}{\partial w} = \frac{\partial L}{\partial \hat w} m(w) + 2\frac{\partial L}{\partial \hat w} w^2 m(w)(1-m(w)) $
The 1st term is a typical gradient in mask-based pruning, and the 2nd term is an additional gradient with a positive factor. Then,
* if $m(w) \approx 0$ (hard-prune), $\frac{\partial L}{\partial w} = 0$, as in other pruning algorithms.
* if $m(w) \approx 1$ (hard-not-to-prune), $\frac{\partial L}{\partial w} = \frac{\partial L}{\partial \hat w}$, as in other pruning algorithms.
* when $m(w) \approx 0.5$ (i.e., pruning decision is unclear), $m(w)(1-m(w))$ is maximized, boosting the $w$ movement.
Hence, PDP accelerates the SGD updates for the weights near the pruning boundary ($t$) toward a loss-decreasing direction (i.e., more learning per iteration), encouraging the weights to settle on the proper pruning decision by the end. Even if the current gradient is not globally beneficial for the task, many "second" chances will eventually help recover the damage in an accelerated manner. We will add this differentiability analysis to the final draft.
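As an independent sanity check (not the authors' code), the analytic derivative above, $\partial \hat w / \partial w = m(w) + 2 w^2 m(w)(1-m(w))$ with $\tau = 1$, can be compared against a central finite difference of the masked weight:

```python
import math

def soft_mask(w, t):
    # m(w) with tau = 1, in the numerically stable sigmoid form
    return 1.0 / (1.0 + math.exp(t * t - w * w))

def masked_w(w, t):
    # masked weight: w_hat = m(w) * w
    return soft_mask(w, t) * w

w, t, eps = 0.4, 0.3, 1e-6
m = soft_mask(w, t)
analytic = m + 2.0 * w * w * m * (1.0 - m)  # d(w_hat)/dw from the rebuttal
numeric = (masked_w(w + eps, t) - masked_w(w - eps, t)) / (2.0 * eps)
```

The two values agree to high precision; the chain rule then gives $\frac{\partial L}{\partial w} = \frac{\partial L}{\partial \hat w}\left(m(w) + 2 w^2 m(w)(1-m(w))\right)$, which is exactly the two-term formula above.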
**Q2: Lack of analysis and ablation study on the efficacy of the proposed soft mask**
Thank you for the feedback. Gradual pruning is strong and popular; in fact, GMP, STR, ACDC, GradNet, and OptG all use some form of gradual pruning. To analyze the benefit of the proposed soft mask in PDP as an ablation study, we re-ran PDP training for ResNet50 and MobileNet-v1 in exactly the same configuration as in Section 4, but **without** the proposed soft mask.
| | ResNet50 | MobileNet_v1 |
| --- | --- | --- |
| PDP w/o softmask | 73.1 | 66.8 |
| PDP | 74.7 | 68.2 |
| $\Delta$| 1.6 | 1.4 |
The result supports the efficacy of the proposed soft mask in PDP: without it, the ImageNet top1 accuracy drops by 1.6% and 1.4%, respectively.
**Q3: More experiments about Transformer architectures are preferred. The paper provides BERT models experiments on MNLI dataset. Experiments with ViT models or BERT models on the whole GLUE benchmark will be preferred.**
Thank you for suggesting helpful experiments. While we couldn't complete the whole GLUE benchmark due to limited time and resources, we performed additional experiments on the QQP (Quora Question Pairs2) benchmark, for the following reasons:
* QQP is the 2nd largest dataset in GLUE (363,846 samples), following MNLI (392,702).
* MVP reported only MNLI and QQP (the two largest) out of the GLUE benchmarks.
* MVP and POFA reported results on QQP, making a comparison possible.
We used the same hyper-params as in Section 4 and targeted 90% sparsity, and obtained the following results (including MNLI results from the current submission).
| | OptG |MVP | POFA | PDP|
| --- | --- | -- | --- | --|
| MNLI-m | 78.5 |81.2 | 81.5 | 83.1|
| MNLI-mm | 78.3 |81.8 | 82.4 | 83.0|
| QQP-acc | 89.8 | 90.2 | 90.9 | 90.9 |
| QQP-f1 | 86.2 |86.8 | 87.7 | 87.7 |
The results show that PDP offers state-of-the-art 90.9% accuracy on QQP, which is already very close to the dense model accuracy of ~91%. POFA achieves the same quality, but with a more complex training flow.
**Q4: In the main body, there are not enough sparsity ratio selections. Table 4 provides only one sparsity ratio for each model. More sparsity ratios, ranging from low to high are necessary to understand the effect of the proposed method.**
Thank you for the suggestion. In Tables 10/11 of the Appendix, we have MobileNet-v1, ResNet18, and BERT results for sparsity targets ranging from 50% to 80%. We will mention these additional results clearly in Section 4.
**Q5: Redundancy between Figure 5 and Table 4**
Thank you for the review. Indeed, some information is shared between Fig. 5 and Table 4, but they are complementary: one provides a high-level abstraction and the other low-level details. The numbers in the table alone cannot convey the complex accuracy-latency-cost trade-off in an intuitive manner. Hence, we added Fig. 5 with key numbers to visualize the trade-off and help readers with this complex matter.
We agree that the space in the main body can be better utilized for new information/results (especially ones from this rebuttal), and we will accommodate your feedback in the final draft.
**Q6: The better solution would be to change their order or to reconsider the effects of section 3.1**
Thank you for the feedback. We will reorder as suggested and polish the write-up in the updated draft.
---
Rebuttal Comment 1.1:
Title: Post-Rebuttal
Comment: Thanks for the detailed response.
I appreciate the authors' explanation of Q1 and the experiments. Thus, I will increase my score to borderline accept. However, I suggest the authors improve the writing and include more explanation, like the response to Q1, in the paper. | Summary: This paper describes a novel pruning algorithm named parameter-free differentiable pruning (PDP). The core idea is to generate soft pruning masks (i.e., mask values are not finalized until the final iteration of fine-tuning) using a parameter-free, differentiable dynamic function of the weights of the network. This approach eliminates the need to train additional mask parameters (and related hyper-parameter tuning) and appears to be easy to integrate into existing training pipelines. PDP is evaluated on networks drawn from vision and NLP, and on both unstructured and structured N:M sparsity patterns. The paper reports both accuracy and MAC improvements and provides a comparison of PDP to other SOTA pruning algorithms.
Strengths: * The paper is fairly well-written, but could use some reorganization to help with the overall flow.
* The paper targets the relevant and important problem of structured sparsification of large DNNs to improve inference efficiency.
* While soft and progressive pruning are well-known in the literature, its parameter-free nature and the specific mask computation make PDP unique.
* Evaluation methodology is sound, with relevant comparisons to other SOTA approaches, and results being reported on various types of networks (CNNs, attention-based, etc.) across disparate domains.
Weaknesses: Measuring inference efficiency in MACs can often be misleading. For instance, unstructured pruning can reduce MACs by getting rid of individual ineffectual computations, but this is difficult to realize in modern parallel hardware such as GPUs and TPUs. Reporting at least some data points with actual runtime numbers would be useful.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * What is the motivation for selecting the per-layer sparsity value (r) after a few epochs of training? How would a user select the right number of epochs for a given network, sparsity pattern and global sparsity degree?
* Readability suggestion: consider moving Section 3.2 before 3.1.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been adequately addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q0: Measuring inference efficiency in MACs can often be misleading. For instance, unstructured pruning can reduce MACs by getting rid of individual ineffectual computations, but this is difficult to realize in modern parallel hardware such as GPUs and TPUs. Reporting at least some data points with actual runtime numbers would be useful.**
Thank you for your feedback on MACs. We agree that MACs are a theoretical rather than a measured metric, but they still capture the speedup benefit from pruning to some degree. GPUs don't support unstructured sparsity, but there are modern smartphones whose ML accelerators natively support it (unfortunately, a link with details cannot be provided per the NeurIPS rebuttal policy). We used a 2-year-old phone with the latest OS update and obtained the following latency measurements in msec.
| | ResNet50 | MobileNet_v2 |
| --- | --- | --- |
| Dense | 2.71 | 0.95 |
| PDP | 1.39 | 0.75 |
Due to other system overheads, the end-to-end latency reduction is not as significant as the MAC savings, yet it still shows the potential benefits of unstructured sparsity on a modern device. We hope these data points address the reviewer's concern.
**Q1: What is the motivation for selecting the per-layer sparsity value (r) after a few epochs of training? How would a user select the right number of epochs for a given network, sparsity pattern and global sparsity degree?**
Thank you for the question. $r$ is selected after a few epochs to avoid pruning decisions being dominated by the initial weight values (which can be simply random). Choosing the right number of epochs for a given network depends on various factors the reviewer has already mentioned. Hence, this is a part of hyper-parameter tuning. In our experiments, the following guidelines worked best (and this is how we selected it).
* passes the warm-up epochs
* consistently exceeds half of the accuracy upper bound (i.e., 50% for classification) for 5 epochs.
**Q2: Readability suggestion: consider moving Section 3.2 before 3.1.**
Thank you for the suggestion. We will accommodate it in the updated draft.
---
Rebuttal Comment 1.1:
Comment: Thank you for responding to my questions. My score remains the same. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers and ACs for their help and feedback. The highlights of our rebuttal include the following.
New experimental results:
- Peak memory consumption is measured with PDP and OptG on a small GPT2 model.
- PDP result with MobileNet-v3 and ImageNet1k is added.
- Latency benefit with unstructured sparsity is measured on a modern smartphone.
- The efficacy of the proposed soft mask is studied with ResNet50 and MobileNet-v1 on ImageNet1k.
- PDP and other techniques have been experimented with on the QQP dataset of the GLUE benchmark.
New analysis:
- A theoretical aspect of PDP is presented.
- PDP is explained in the context of SAD. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces a new DNN pruning scheme called Parameter-free Differentiable Pruning (PDP), which is an efficient and effective train-time pruning method that offers state-of-the-art qualities in model size, accuracy, and training cost. Unlike existing pruning approaches, PDP generates soft pruning masks for weights in a parameter-free manner, making it easy to apply to various vision and natural language tasks, DNN architectures, and structured pruning constraints. The paper presents experimental results on several benchmark datasets, including MobileNet-v1 and BERT, demonstrating that PDP achieves impressive results in terms of model size, accuracy, and training cost.
Strengths: - The paper introduces PDP, a novel differentiable pruning method that is parameter-free, using a dynamic function of the weights to generate soft pruning masks for the weights.
- PDP can be applied to structured and channel pruning, such as N:M pruning, a configuration that top-of-the-line GPUs support.
Weaknesses: - The PDP differentiable pruning does not introduce extra parameters, but it still needs to generate a (soft) mask from the weights, which induces extra activation maps. How does the memory consumption during differentiable pruning compare to other SoTAs?
- The paper seems to provide results on MobileNet-v1/v2 but not on MobileNet-v3; can the authors elaborate on why?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: As shown in weaknesses part.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper did not provide any (peak) memory consumption results for PDP during differentiable pruning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q0: The PDP differentiable pruning does not introduce extra parameter, but it still need to generate (soft) mask from weight, which would induce extra activation maps, how is the memory consumption during differentiable pruning compare to other SoTAs?**
Thank you for raising an important question regarding memory consumption. While having fewer learnable parameters reduces the model footprint and communication cost (thus speeding up training) in a multi-node setup, we agree that extra activation maps are still induced. To better understand the memory consumption, we built a small GPT2 configuration with block_size=128 and n_layer=3 (a down-scaled version of the one in Table 3). GPU memory (in GB) was then measured at three different spots for PDP and OptG (i.e., other SoTAs).
* **Spot0**: right after model/optimizer are created (the parameter + pytorch overheads)
* **Spot1**: right after forward, before backward (the peak memory consumption)
* **Spot2**: right after backward, before weight update (other parameter-related overheads)
| | Spot0 | Spot1 | Spot2 |
| --- | --- | --- | --- |
| OptG | 2.93 | 18.3 | 3.58 |
| PDP | 1.30 | 16.3 | 2.93 |
From the table, we can see the following:
* Not having extra mask parameters helps to reduce the model/optimizer overheads.
* However, the peak memory is dominated by the activations (both data and mask). As pointed out by the reviewer, even a soft mask still requires activation space. Yet, about a 10% saving in peak memory is observed.
* After backward, the gradients for the learnable masks take up a large memory space, which must be all-reduced in the multi-GPU setting.
We will add the above discussion in the final draft.
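The three-spot measurement pattern can be sketched generically. The sketch below is purely illustrative, not the rebuttal's instrumentation: Python's `tracemalloc` (CPU allocations) stands in for CUDA memory counters such as `torch.cuda.memory_allocated`, and the allocation sizes are arbitrary toy values.

```python
import tracemalloc

def measure_three_spots():
    """Record memory at the three spots from the table above.
    tracemalloc (CPU) stands in for CUDA memory counters; sizes are toy."""
    tracemalloc.start()
    params = [0.0] * 200_000                    # Spot0: model/optimizer created
    spot0, _ = tracemalloc.get_traced_memory()
    acts = [[0.0] * 1_000 for _ in range(500)]  # Spot1: after forward, before
    spot1, _ = tracemalloc.get_traced_memory()  #        backward (activations live)
    del acts                                    # Spot2: after backward
    grads = [0.0] * 200_000                     #        (activations freed,
    spot2, _ = tracemalloc.get_traced_memory()  #        gradients remain)
    tracemalloc.stop()
    return spot0, spot1, spot2
```

Even with toy sizes, the pattern reproduces the qualitative shape of the table: the peak occurs at Spot1, where activations dominate.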
**Q1: The paper seem provide results on MobileNet-v1/v2 but not on MobileNet-v3, can the author elaborate on why?**
We appreciate your feedback on MobileNet-v3. We didn't experiment with MobileNet-v3 because the prior art doesn't report related results, making a comparison with other SoTAs on MobileNet-v3 difficult.
When we applied PDP with an 80% sparsity target to MobileNet-v3 (5.5M parameters), we achieved 71.5% top-1 ImageNet1K accuracy, which is 2.5% lower than the dense version. We will include this result in the final draft. | null | null | null | null | null | null
Unlimiformer: Long-Range Transformers with Unlimited Length Input | Accept (poster) | Summary: This paper proposes to use k-nearest-neighbor search to retrieve the nearest encoded tokens in pretrained encoder-decoder transformers. This removes the bottleneck of limited input length, letting the dataset decide how long inputs can be. They show empirically that the proposed Unlimiformer can extend to inputs beyond 500K tokens and works well with pretrained models such as BART and Longformer.
Strengths: * The proposed method is a simple add on top of existing pre-trained models
* Equation 2 is a somewhat novel rewrite of QK.
* The authors performed extensive experiments showcasing that a) Unlimiformer works well on pretrained models, b) in many cases, using the full input and then selecting the top-k tokens for cross-attention increases performance, and c) the computational increase is less than linear in the number of input tokens.
Weaknesses: * The idea of retrieving the top-K most similar tokens to approximate the softmax has been explored in Reformer, which used LSH to retrieve the top-K; that method should work even in the cross-attention scenario, and their experiments went up to 64K tokens. In that light, the method proposed by this paper is not very dissimilar, which makes it hard to recommend accepting the paper. If the authors think that the proposed method is different, please comment on it; it should also have been compared in the experiments.
* Another thing that is not clear to me is why the forward pass of the *encoder* is not quadratically increasing in time complexity as the number of tokens increase, and why this was not included in Figure 4.
****
These questions were answered during the rebuttal phase
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: I will request the authors to please comment on the points raised in the weakness section. I will update my scores accordingly
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper! We were happy to read that you appreciated our main points: that Unlimiformer is simple, achieves significant improvements, and allows scaling of the input with less than linear increase in wall-clock time.
We think that your concerns are addressable within this discussion period. Please see our response below. We would love to further address additional questions during the discussion period if anything is unclear.
**"The idea of retrieving top-K most similar tokens to approximate the softmax has been explored in Reformer. They used LSH to retrieve top-K and that method should work even in the cross attention scenario. In their experiments they have experimented with 64K tokens. In the light of that the method proposed by this paper is not very dissimilar. This makes it hard to recommend accepting the paper. If the authors think that the proposed method is different please comment on it and it should have been compared in the experiments."**
We believe Unlimiformer differs substantially from Reformer. Reformer uses bucketed local attentions, where each token can only attend to tokens in the same bucket, and all tokens are attended to at all steps. This means that Reformer requires significant changes to the model architecture and **cannot be applied to existing pretrained models**. By contrast, Unlimiformer:
* Can be applied to existing pretrained models, since it does not change their architecture
* Can be applied without any additional training at all (including finetuning), if desired
* Allows different tokens to be in the same attention window at different steps/heads/layers. In Reformer, tokens can never attend outside of the hash bucket they have landed within.
We compared Unlimiformer to Memorizing Transformers (Wu et al., ICLR’2022), Longformer-Encoder-Decoder (LED; Beltagy et al., 2020) with the summarization-specific pretraining of PRIMERA (Xiao et al., ACL’2022), and SLED (Ivgi et al., TACL’2023), as we believe these are the most similar approaches. We also note that Longformer demonstrated improvements over Reformer, and we demonstrate empirical and conceptual improvements over Longformer-family models.
However, we do agree that clarifying the differences with existing approaches is quite important, so we will add some of this discussion to the revised draft.
**"Another thing that is not clear to me is why the forward pass of the encoder is not quadratically increasing in time complexity as the number of tokens increase, and why this was not included in Figure 4."**
Because we do not modify the encoder architecture in Unlimiformer, we encode all inputs in overlapping chunks (see section 2.1 in the paper and Figure 1 in the extra 1-page Figures PDF). This means that, as the number of tokens increases, the *number of forward passes of the encoder increases linearly*, with each forward pass having the same time complexity (quadratic over the small, fixed, context window size) as the base model. Specifically, if the model has a context window size of N, each pass of N tokens through the encoder has complexity N^2, where N is fixed. Encoding an input of size 10N with Unlimiformer requires passing 20 chunks of N tokens each through the encoder; encoding an input of size 100N requires passing 200 chunks through the encoder, which is linearly more expensive.
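The chunking arithmetic above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; the 50% overlap (stride of half the window) is an assumption chosen for illustration, and the paper's exact chunking is described in its Section 2.1.

```python
def chunk_spans(n_tokens, window, stride):
    """Overlapping (start, end) spans covering an n_tokens-long input
    with a fixed-size encoder window."""
    starts = list(range(0, max(n_tokens - window, 0) + 1, stride))
    if starts[-1] + window < n_tokens:  # make sure the tail is covered
        starts.append(n_tokens - window)
    return [(s, min(s + window, n_tokens)) for s in starts]
```

With window N = 1024 and stride N/2, a 10N-token input yields 19 chunks and a 100N-token input 199 chunks: the number of fixed-cost encoder forward passes grows linearly with input length, while each pass keeps the same quadratic cost over the fixed window.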
The time to encode the inputs is included in Figure 4; the trend is still sublinear, as encoding is a minority of the computational cost of the entire model’s prediction, and the cost of decoding scales sublinearly. Overall, when the input is 100k tokens long, **Unlimiformer is only ~3.5x slower while processing 100x more input** than the base model. This includes the additional time to encode the input.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: I thanks the authors for clarifying my questions and for the new results/analysis in the attached pdf. As a result I have moved up my initial score by two points.
One request/recommendation for the paper would be to clearly describe the chunking algorithm in the main paper and contrast it with related work.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response and for raising your score!
We will add additional clarification of the chunking. Thank you for your review, we feel it has helped us improve the paper. | Summary: The paper proposes Unlimiformer, a new method for increasing the context length of Transformers without any modification by using retrieval. The idea is simple, and immediately improves performance on benchmark tasks.
Strengths: The idea is simple, and the experiments show that augmenting Transformers with a retrieval system is useful. Inference cost increases sublinearly with the amount of input, which is helpful. Overall a solid idea and execution of the analysis.
Weaknesses: The major weakness is understanding whether the model can actually use all the context that it is being exposed to. It seems clear that using a nearest-neighbor based approach would limit the effectiveness of the context length to some fixed amount of input (once the KNN is "saturated" with nearby elements to the query it does not use further contexts).
Figure 3 seems to suggest that the model does not use data beyond 32K context, or at least has a lot of trouble doing so. The trend is downwards after 32K until the 350K point at the end, which could be an outlier. A natural extension would be to adapt the KNN search in some way as the amount of context increases.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do you have more evidence that all the context is used as you increase the context?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper! We were happy to read that you appreciated our main points: that Unlimiformer is simple, immediately improves performance, and does not require any modification to the architecture.
We think that all your questions are addressable within this discussion period. Please see our response below. We would love to further address additional questions during the discussion period if anything is unclear.
**"It seems clear that using a nearest-neighbor-based approach would limit the effectiveness of the context length to some fixed amount of input (once the KNN is "saturated" with nearby elements to the query it does not use further contexts)."**
In our preliminary experiments, we found that taking the 1024-nearest-neighbors covers more than 99% of the full attention mass, so we are not sure that retrieving more keys at the same time is really necessary.
Further, since each attention head in each decoder layer retrieves its own kNN keys, the context length is **not** limited to 1024 or so, since the decoder can attend to `1024 * num_layers * attention_heads` potentially different tokens in each decoding step.
To perform a fair evaluation, we fixed the number of nearest neighbors that we retrieved to be equal to the size of the vanilla model’s context size (e.g., 1024), such that both the base vanilla model and Unlimiformer can attend to the same *number* of keys.
**"Do you have more evidence that all the context is used as you increase the context?"**
To investigate, we plotted the frequency of retrieval for keys across the full decoding process. In book summarization, we found that the test-time-only Unlimiformer retrieved, on average, 43.5% of the encoded tokens at least once at test time; and the “alternating training” model retrieved 64.5% of the encoded tokens at least once, on average. These were the models with the least and most coverage of the input tokens, respectively. Also note that we retrieve **encoded, contextualized hidden states**, so even vectors that were not directly retrieved by the decoder impacted the final output.
We found no specific skew or pattern in the retrieved keys, and keys from the entire input were used by the model; for all models, the median *location* of a retrieved key was between 49.73% and 49.87% of the way through the input document. See also Figure 2 in the additional 1-page Figures PDF.
We cannot prove that “**all** the context is used”, since there are likely to be parts of each input that are not needed to be used (if they are irrelevant to the output). However, Unlimiformer has the ability to use any part of the context, by retrieving it using a computation that is equivalent to attending to it. The only difference between Unlimiformer and the baselines is the ability to look at the entire output, and Unlimiformer improves base models on long sequences either with or without training.
**"Figure 3 seems to suggest that the model does not use data beyond 32K context, or at least has a lot of trouble doing so"**
We believe that this is mostly a limitation of evaluating such long and information-heavy generations. Although there is a slight drop at 64k and 100k, the performance there is still significantly better than the vanilla BART base, and the general trend is that processing longer context leads to better outputs. Unlimiformer has no inherent preference for the “beginning” or the “end” of a book, and all keys can be equally retrieved.
**"A natural extension would be to adapt the KNN search in some way as the amount of context increases."**
Yes, but we see this as out of the scope of the current work. We believe that the most simple and natural idea is to retrieve the same number of keys as the base model, in order to perform a fair comparison to the base model.
Further, although an interesting potential improvement, setting $k$ to be larger than the vanilla model’s context window size would *require* training, because the decoder was not trained to attend to more keys than its context window size; this would take away the advantage of using Unlimiformer training-free.
We thus leave this extension to future work. Thank you for this suggestion.
---
Rebuttal Comment 1.1:
Comment: Thank you for the extensive rebuttal and additional experiments. I will be raising my score to a 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for responding and for increasing your score!
Please let us know if there are additional questions before the end of the discussion period. | Summary: The paper proposes a method to increase context lengths of encoder-decoder transformers to very long input sequences. The idea is to essentially encode all of the tokens of the entire input (on overlapping context-length chunks) and create an index of the input tokens. Just prior to decoding, k-nearest neighbor (kNN) encoded tokens are selected from the index to maximize $QK^T$ dot product of the cross-attention phase, between the encoder and decoder hidden states, during decoding. One of the key contributions in this paper is that they re-order the computation of the matrix products in the cross-attention ($QK^T$) so that creating a single index of the encoder tokens is sufficient, and kNN look-up can be performed efficiently to compute ($QK_{kNN}^T \approx QK_{best}^T$). Additionally, their method has the advantage of being applied to existing pre-trained architectures without the need for additional weights or tuning. It can be applied directly during inference at test time. Applying it during training gives an additional performance boost.
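The reordering of the cross-attention product mentioned in the summary (rewriting $QK^T$ so that one index over the raw encoder states suffices) can be illustrated with a toy sketch. The dimensions and matrices below are made up for illustration, and this is not the authors' implementation.

```python
def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy encoder hidden states h_i and one head's key projection W_k.
H = [[0.1, 0.4], [0.3, -0.2], [0.5, 0.7]]
Wk = [[0.2, -0.1], [0.4, 0.3]]
q = [1.0, -0.5]  # decoder query (already projected by W_q)

# Standard cross-attention: score_i = q . (W_k h_i)
# -> would need a separate key index per head and layer.
scores_std = [dot(q, matvec(Wk, h)) for h in H]

# Reordered: score_i = (W_k^T q) . h_i
# -> one shared index over the raw encoder states h_i.
Wk_T = [list(col) for col in zip(*Wk)]
scores_reordered = [dot(matvec(Wk_T, q), h) for h in H]
```

Because the two score lists are identical, the kNN search can run against a single index of encoder hidden states, with each head's and layer's projection folded into the query instead of into the index.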
Strengths: * The idea is good and the experimental evidence is strong.
* The evaluation comparisons using “low-cost” tuning methods, i.e. applying this idea just during validation and testing is also clever.
* The proposed method has interesting advantages,
1. While the idea of using kNN is not new, it has been done nicely and efficiently,
2. The method can also be applied to existing pre-trained encoder-decoder models during inference directly.
3. There's further boost in performance when applied during training.
Weaknesses: No major weaknesses in the overall quality of the contribution. There are minor weaknesses which the authors can address, e.g., the clarity of the writing can be improved, results in some tables are incomplete, etc. I have framed those as questions below for the authors to address.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Fig. 2 shows the encoder encoding disjoint chunks, whereas text lines L73-74 say it is overlapping chunks. Can you clarify which of these is correct, and change the figure to represent that more accurately? The figure, overall, can be improved.
2. For the k nearest neighbor indexing and search, you mention the use of the FAISS library. Do you use approximate nearest neighbors, or the exact nearest neighbors? Considering most contexts are fairly small, if you’re not using exact nearest neighbors, can you provide a small comparison to using exact nearest neighbors, and share whether or not it affects results (and/or time).
3. In the Comparison to Wu et al. section, it would be good to highlight that the Memorizing Transformers paper applies their approach to decoder-only models. Your approach is applied to the encoder-decoder model.
4. Train chunked +test Unlimiformer (L137-141) requires more details. e.g., I don’t understand what treating each chunk as its own training example entails. Is this what you do: For an example that’s longer than the context length, do you chunk it into say C non-overlapping chunks, treat each (chunk, output) pair as a training example (to create C examples), but allow retrieval over the tokens of all the C chunks during decoding? (After reading retrieval training, *I guess there is actually no retrieval happening over training data* in the Train chunked +test Unlimiformer)
* 4.1 For Train chunked +test Unlimiformer, how does the duplication of some training examples because of the C (chunk, output) pairs figure into evaluation? Do you try to combine the C outputs to a single one, or take the average, or ignore the duplication and just treat them as more example pairs than what other methods would have? (Just clarify this detail in the paper)
5. Is Sec. 3.2 also doing fine-tuning? I was quite confused by the statement “We also consider training Unlimiformer directly”. Perhaps a different way to separate out section 3.1 and 3.2 is to say that in 3.1. you are applying kNN look up during model evaluation (while evaluating the models after each period of training), Vs 3.2 where you are applying kNN strategy on the training data examples (where the choice of the kNN values directly affects the model gradients during tuning).
6. During retrieval training, for the training examples that are longer than the context length, do you just have 1 copy of the (input, output) pair example in training – unlike the train chunked case where you could have multiple copies? Is this true even for training examples that are longer than 16k tokens?
7. In Table 4 when you say “Unlimiformer (this work)” do you mean the “Retrieval training” regime?
8. In Table 5 for BookSum, why isn’t there a line reporting results for BART-base Unlimiformer (Retrieval Training)?
9. Table 4 doesn’t directly include PRIMERA base training (standard finetuning) results, but worth repeating that result line from Table 3, so that this sentence (L206-207) is easier to follow: “in Table 4 highlight two important points: first, Unlimiformer+BARTbase performs better than the base PRIMERA across all metrics and datasets”
10. In the Comparison to SLED (Ivgi et al.) section, the statements in L274-276
*“…This in practice limits SLED to only about 16k token-long inputs on a single GPU; in contrast, instead of attending to all input tokens, Unlimiformer attends only to the top-k input tokens for every attention head, and thus can process unlimited inputs in practice”*
Contradicts your statements in Sec. 3.2 Retrieval Training L149-150
*“When inputs are longer than 16k tokens, we truncated the input to 16k tokens at training time due to GPU memory requirements.”*
So, it seems like at least in practice, in the experiments reported in this paper, the input length was restricted to a max of 16k tokens due to practical limitations. Is that correct to say?
11. [Comment] In Table 2 (Sec 4.1. Datasets), for the distribution plots it might be worth adding a vertical line to indicate where the average length (avg. number of input tokens) falls in the distribution curve.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Perhaps worth noting that this is not applied to decoder-only models.
Also worth noting the 16k limitation to fit into GPU during training.
---
**Post rebuttal notes**
The authors rebuttal and subsequent discussion addressed all my questions and concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper! We were happy to read that you appreciated our main points: that Unlimiformer can be applied to existing pre-trained architectures without the need for additional weights or tuning, that our reordering of the cross-attention computation allows using a single index of the encoder tokens, and that Unlimiformer can be applied directly at test time.
We are happy to hear that you think that there are no major weaknesses. We think that all your questions are addressable within this discussion period. Please see our response below. We will include these clarifications in the paper, and we would love to further address additional questions during the discussion period if anything is unclear.
**Q1. Fig. 2 shows encoder encoding disjoint chunks, whereas text lines L73-74 say it is overlapping chunks. Can you clarify and change the figure to represent that more accurately?**
We encoded overlapping chunks to have sufficient context, and then we took only the “middle” of each encoded chunk, such that eventually we have a single encoded vector for each input token. Thank you for this suggestion, we improved the figure, and the new version is now included as Figure 1 in the 1-page Figures PDF.
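As a rough sketch of this scheme (not the paper's actual implementation; the helper names and the toy identity "encoder" are illustrative), the stride is half the chunk length, middle chunks contribute their middle half, and the first/last chunks also keep their outer quarter, so every input token ends up with exactly one encoded vector:

```python
import numpy as np

def encode_long_input(tokens, encode, chunk_len=8):
    # Encode overlapping chunks (stride = chunk_len // 2) and keep only the
    # middle half of each encoded chunk; the first/last chunks also keep
    # their outer quarter.  For simplicity this sketch assumes len(tokens)
    # is a multiple of chunk_len // 2.
    n = len(tokens)
    if n <= chunk_len:
        return encode(tokens)
    step, quarter = chunk_len // 2, chunk_len // 4
    last_start = n - chunk_len
    pieces = []
    for s in range(0, last_start + 1, step):
        enc = encode(tokens[s:s + chunk_len])                 # contextualized chunk
        lo = 0 if s == 0 else quarter                         # first chunk keeps its head
        hi = chunk_len if s == last_start else chunk_len - quarter  # last keeps its tail
        pieces.append(enc[lo:hi])
    return np.concatenate(pieces)

# With an identity "encoder", every position survives exactly once, in order:
out = encode_long_input(np.arange(16), lambda x: np.asarray(x), chunk_len=8)
print(out.tolist())  # [0, 1, ..., 15]
```

With a real encoder, each kept vector is contextualized by the full chunk around it, which is the point of the overlap.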
**Q2. For the k nearest neighbor indexing and search, do you use approximate nearest neighbors, or the exact nearest neighbors? Can you provide a small comparison to using exact nearest neighbors, and share whether or not it affects results (and/or time)?**
In GovReport and SummScreen, where the validation inputs are 20k-70k tokens long at most, we tried using exact nearest neighbors (a “flat” faiss index). However, using approximate nearest neighbors did not change the results **at all**.
Querying the index using approximate-NNs is much faster; however, it takes a few seconds to build the initial index for each example, so the time to process an entire test example was very similar overall.
**Q3. In the Comparison to Wu et al. section, it would be good to highlight that the Memorizing Transformers paper applies their approach to decoder-only models. Your approach is applied to the encoder-decoder model.**
We agree, and we will clarify this in the paper.
**Q4. Train chunked +test Unlimiformer (L137-141) requires more details [...]; Q4.1 how does this figure into evaluation?**
The first part of your description is correct: “chunk it into say C non-overlapping chunks, treat each (chunk, output) pair as a training example (to create C examples).” As you noted, we do not do retrieval during chunked training; the examples are treated as completely distinct. The idea in this chunking is to try to achieve the best results **without** increasing hardware requirements beyond standard finetuning.
At test time, we treat `(full input, output)` as a single example, without using chunking at all, and apply Unlimiformer to process the entire input. Thus, chunking is a training-time data-augmentation choice that is independent of the model used. We will clarify.
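For concreteness, this chunked-training augmentation amounts to something like the following sketch (the function name is ours, not from the paper's codebase):

```python
def chunk_training_examples(input_ids, output_ids, context_len):
    # Split an over-length input into C non-overlapping chunks and pair each
    # chunk with the *same* output, yielding C (chunk, output) training
    # examples -- no retrieval happens over training data in this regime.
    chunks = [input_ids[i:i + context_len]
              for i in range(0, len(input_ids), context_len)]
    return [(chunk, output_ids) for chunk in chunks]

pairs = chunk_training_examples(list(range(10)), ["summary"], context_len=4)
print(len(pairs))  # 3 examples: chunks [0..3], [4..7], [8..9]
```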
**Q5. Perhaps a different way to separate out section 3.1 and 3.2 is to say that in 3.1. you are applying kNN look up during model evaluation, vs 3.2 where you are applying kNN strategy on the training data examples**
Your rephrasing here is correct and much clearer-- thank you for the suggestion!
**Q6. During retrieval training, for the training examples that are longer than the context length, do you just have 1 copy of the (input, output) pair example in training?**
Yes, we have only a single copy of each example in retrieval training. In the case where examples are longer than 16k tokens, we could employ chunking with chunk size 16k, but we found that the method worked well in practice without this additional step.
**Q7. In Table 4 when you say “Unlimiformer (this work)” do you mean the “Retrieval training” regime?**
This is the “alternating training” (alternating batches of “retrieval training” and “random-encoded training”) which worked best here; we will clarify this in the paper.
**Q8. In Table 5 for BookSum, why isn’t there a line reporting results for BART-base Unlimiformer (Retrieval Training)?**
We apologize for the omission– we will include these results in the next version, and in the 1-page Figures PDF (see Table 1).
**Q9. Table 4 doesn’t directly include PRIMERA base training (standard finetuning) results, but worth repeating that result line from Table 3**
“PRIMERA standard finetuning” is presented as “LED-large - PRIMERA” in Table 4. We agree that this is unclear, and we will clarify this.
**Q10. it seems like at least in practice, in the experiments reported in this paper the input length was restricted to max 16k tokens due to practical limitations. Is that correct to say?**
SLED’s limitation is during both training and inference; Unlimiformer has a practical limitation on length at training-time, but no practical length restriction at test-time. In all Unlimiformer results in the paper, we are using the full test-set inputs without any truncation at test time.
**Q11. [Comment] for the distribution plots: add a vertical line to indicate the average length**
This is a great idea! We included this suggestion in Table 2 in the 1-page Figures PDF.
**Perhaps worth noting that this is not applied to decoder-only models.**
In the scope of this paper, we focused on encoder-decoder models. We will clarify this.
Although we did not perform an extensive evaluation with LLama-2 (and thus it is out of the scope of the paper), our codebase (which we will publicly release) now also supports decoder-only models such as LLama-2: in decoder-only models, we need to keep an index for each layer, but we can still leverage our Attention Reformulation (Section 2.3) to share the same index among all attention heads in the same layer.
**Also worth noting the 16k limitation to fit into GPU during training.**
Thank you, we will include this in the Limitations section.
---
Rebuttal Comment 1.1:
Title: Any evals on tasks other than summarization?
Comment: Thanks for your responses. It's also good to know about the extension to decoder-only models.
Have you evaluated on tasks other than summarization? Even if the inputs were not really long?
I agree with the concern raised by Reviewer ym6S, regarding eval on summarization tasks alone. Despite your response there, it does seem like a limitation. It would be interesting to know if you have done any evals on one or 2 other tasks, perhaps the others from the SCROLLS benchmark? This might be particularly relevant for the models that have been trained with the modified retrieval layer (retrieval/enc-dec/alternating) training. Even if it is on tasks where the input length is not large, it would be of interest to validate whether the model performance regresses on those tasks (and if it does to what extent, if it doesn't that's also useful to know).
---
Reply to Comment 1.1.1:
Comment: Thank you for your response!
We have queued experiments on more tasks from the SCROLLS benchmark, and we will report these results in the next few days. | Summary: This paper proposes to use kNN based search to replace the notoriously memory consuming quadratic attention in modern Transformers to allow extremely long sequence input. Proposed method is simple, and can be applied to any pre-trained Transformer. The proposed model, Unlimiformer, is evaluated on long text summarization tasks, and achieves significant improvement over baselines due to the increased context length. Note that the proposed method are shown to be effective without any finetuning on some tasks.
Strengths: S1. Sequence length is a major pain point of modern Transformer. This work targets a significant issue and achieves promising results.
S2. Proposed method can be applied to multiple pre-trained models (BART, PRIMERA)
Weaknesses: W1. Only evaluated on text summarization.
W2. Memory / Speed trade-offs are not quantitatively studied.
Evaluation on one single task is the major limitation of this work. The usage of Transformer is broad. For example, long document QA tasks should also be studied. Long Range Arena is also a useful benchmark for efficient Transformers. Beyond NLP, I am also curious if this can be applied to computer vision transformers such as ViT, to enable higher-resolution input images.
In addition, a study of memory / speed trade-offs would give readers clearer guidelines when applying the proposed method. I strongly suggest adding this study. I am also curious how this method scales to LLMs larger than BART.
Currently, I think the strengths outweigh the weaknesses so I'm leaning toward acceptance.
---
**Update after rebuttal**
Both W1 and W2 are properly addressed by the authors. Additional results on SCROLLS show that Unlimiformer can generalize to other NLP tasks beyond summarization, and an additional pre-trained model, T5, is also studied. The additional memory / speed study also demonstrates the advantage of the proposed method. Since this method can be applied to many tasks and many pre-trained transformer LMs, it has the potential to achieve high impact in the community. I decided to increase my rating from 5 to 7. My confidence score is also raised from 3 to 4. I think this paper is a clear accept. The reason I'm still a little conservative is that I don't have first-hand experience in working on these specific benchmarks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Suggestions:
- I would tone down the claim of 'unlimited length', because CPU memory, although typically larger than GPU memory, is still practically limited. Instead, I suggest the authors stress-test the limit for a specific hardware setup (CPU & GPU) with synthetic long sequences so users have a practical reference.
- Consider citing more related works such as Cluster-Former [1], Set Transformer [2], and Fast Transformer [3].
---
**Update after rebuttal**
I don't have further questions and suggestions for the submission. Thanks for the response!
[1] Wang et al., "Cluster-Former: Clustering-based Sparse Transformer for Question Answering", Findings of ACL 2021
[2] Lee et al., "A framework for attention-based permutation-invariant neural networks", ICML 2019
[3] Vyas et al., "Fast transformers with clustered attention", NeurIPS 2020
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper! We were happy to read that you appreciated our main points: that Unlimiformer is simple, achieves significant improvements, is effective even without any finetuning, and that you think that the strengths outweigh the weaknesses.
We think that all your questions are addressable within this discussion period. Please see our response below. We would love to further address additional questions during the discussion period if anything is unclear.
**"The approach is evaluated only on text summarization [...] I am also curious if this can be applied to computer vision transformers such as ViT..."**
We focused on summarization because producing summaries with high ratios of compression requires using information from full, sometimes extreme-length inputs.
Thus, our evaluation includes long document summarization and book summarization, using 3 datasets (GovReport, SummScreen, BookSum) and 2 base models (BART, LED).
We are happy to hear that our approach sparks ideas for additional modalities, and we agree that it would be interesting to apply this approach to vision transformers, but we think that this is a bit out of the scope of the current paper, as we focus on natural language transformers.
We also have results for T5 as the base model, which we will include in the next version.
**"Speed trade-offs are not quantitatively studied"**
Processing extremely long inputs with Unlimiformer is not as “real-time” fast as vanilla Transformers yet, but we study speed trade-offs in Figure 4 and Section 6.
For example, when the input is 100k tokens long, **Unlimiformer is only ~3.5x slower while processing 100x more input** than the base model. This includes the additional time to encode the input.
**CPU memory is, although typically larger than GPU memory, still practically limited. Is Unlimiformer really unlimited?**
Of course, nothing in nature is unlimited. However, we argue that Unlimiformer is effectively unlimited because it can process any practical input: Unlimiformer requires keeping only a single vector per input token. For example, using BART or T5-large, where hidden states are of size 1024, encoding **1M tokens** using fp16 takes **only 2GB of memory**, which can fit multiple times over in GPU memory, let alone CPU memory. Additionally, if CPU memory is truly insufficient, the kNN implementation we used (faiss) supports saving indices on disk and loading only portions of them into memory at a time for search (see their documentation for details).
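For reference, the arithmetic behind that 2GB figure is just one fp16 vector of size 1024 per token:

```python
tokens = 1_000_000          # 1M input tokens
hidden_size = 1024          # BART / T5-large hidden state size
bytes_per_value = 2         # fp16

total_bytes = tokens * hidden_size * bytes_per_value
print(total_bytes / 2**30)  # ~1.9 GiB, i.e. roughly 2GB for the whole datastore
```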
**"I am also curious how this method scales to LLMs larger than BART."**
Most of our experiments were conducted using BART-base or PRIMERA (Longformer-Encoder-Decoder-large) as the base models.
After the submission deadline, we managed to run experiments with T5 (base) on GovReport as well:
| Model | Rouge 1 / 2 / L / BERTScore |
|-----------------|-----------------------------|
| T5 - standard finetuning | 41.9 / 17.0 / 26.7 / 61.2 |
| T5 - Unlimiformer (alternate training) | **50.1** / **22.8** / **26.4** / **64.3** |
We will add this result to a revised version of the paper.
Although it’s out of the scope of our paper, our codebase supports LLama-2, and we managed to encode and summarize entire books (hundreds of thousands of tokens) using Unlimiformer-LLama-2 (13B) within a few minutes, using only 2 A6000 GPUs.
We thus believe that in terms of scaling, our approach scales very well to larger models as well.
**"Consider citing more related works such as Cluster-Former [1], Set Transformer [2], and Fast Transformer [3]."**
Thank you, we will cite and discuss these works in our revised version.
---
Rebuttal Comment 1.1:
Title: Reviewer's response
Comment: Thanks for responding to my questions and concerns. I hope my comments / suggestions are helpful for your future revisions.
For the speed-memory tradeoff (W2), perhaps I wasn't clear in my original review. I was hoping to see some memory - speed plot when replacing 0, 1, 2, ... to all layers in the baseline Transformer with an UnlimiFormer layer under a fixed sequence length. Fig. 4 is good, but I'm just suggesting more experiments for a more complete study.
I understand that the authors focus on summarization and this is well motivated. However, this remains the main reason (W1) that I won't raise my final rating. A lot of efficient transformers have been proposed, and the most impactful ones are those experimented on multiple NLP tasks. For a task-specific method like this submission, perhaps the authors should refrain from using such a generalist paper title. Perhaps "Long-Range Transformer for Unlimited-Length Summarization" better fits the current scope.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response!
**For W2:** Apologies for the misunderstanding! We have now run the analysis you proposed on BART-base+Unlimiformer (retrieval).
| Number of layers with Unlimiformer | Max GPU memory allocated | Total time taken | Entity Mention (EntMent) |
| ---------------------------------- | ------------------------ | ---------------- | ------- |
| 0 (normal inference) | 24.8% (11.9 GB) | 6m27s | 9.8 |
| 1 | 85.5% (41.0 GB) | 16m37s | 15.3 |
| 2 | 89.6% (43.0 GB) | 19m32s | 15.2 |
| 3 | 85.9% (41.2 GB) | 19m35s | 17.7 |
| 4 | 86.1% (41.3 GB) | 21m25s | 17.3 |
| 5 | 84.1% (40.4 GB) | 22m53s | 18.8 |
| 6 (all layers) | 85.2% (40.9 GB) | 26m14s | 20.3 |
Here, we are measuring maximum GPU memory allocated, total time, and Entity Mention over the full BookSum test set using a single A6000 GPU. The longest example in the test set is approximately 505k tokens.
The total time taken does increase when we apply Unlimiformer to more layers, but the memory allocated does not, because we use our attention reformulation to reuse the same datastore across layers. We see improved Entity Mention (EntMent) when we apply Unlimiformer on more layers, and applying Unlimiformer at all layers requires only ~3.4x more memory and ~4x more time than running the base model while processing inputs that are 100-500x longer.
(The slight differences in max memory allocated are because the profiler we used is taking snapshots at slightly different times, not because there is significant variation in memory usage between 1-6 Unlimiformer layers. We will average over repeated runs for stability when we report these results in the paper).
**For W1:** We have queued experiments on more tasks from the SCROLLS benchmark, and we will report these results in the next few days. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and feedback! We are encouraged that all reviewers have noted the benefits of Unlimiformer, including that it can be applied to pretrained models with no additional training, has sublinear inference time w.r.t. the length of the input, and leads to significant performance improvements.
We have provided more details and additional analysis of our results in the responses to each individual reviewer. In the rebuttal doc, we have provided:
* Figure 1: an updated version of the paper's Figure 2, using feedback from reviewer cHBL.
* Figure 2: a new figure demonstrating the locations retrieved from during decoding, to address questions from reviewers EQX8 and VEz6.
* Table 1: an updated version of the paper's Table 5 with results for additional Unlimiformer settings on BookSum, as reviewer cHBL requested.
* Table 2: an updated version of the paper's Table 2 with updated visualizations for dataset input lengths, using feedback from reviewer cHBL.
We look forward to answering any additional questions in the discussion period.
Pdf: /pdf/23e296c897a24ad346109c8646fde175607a7dcc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: Unlimiformer: Transformers with Unlimited Input Length:- proposes a novel method to overcome the context window limitation in encoder-decoder Transformer models. The key innovation introduced in this paper is a retrieval-based method that integrates a k-Nearest Neighbors (kNN) search into each decoder layer of the model. This approach allows each attention head to choose different context windows from the full-length input at each decoding step.
The authors propose an encoding process for long input sequences that uses the model's encoder on overlapping chunks of the input. To ensure sufficient context, only the middle half of the encoded vectors from each chunk is kept.
The authors diverge from the standard Transformer cross-attention mechanism by retrieving the top-k hidden states from the entire input sequence. The proposed method involves a mathematical reformulation of the attention mechanism to incorporate the kNN search into the cross-attention process more efficiently.
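This reformulation can be checked numerically. In the sketch below (single head, random data, no scaling or softmax; all names are illustrative), the usual score $(h_d W_q)(h_e W_k)^\top$ equals $(h_d W_q W_k^\top) h_e^\top$, so a single index over the raw encoder states $h_e$ serves every head and layer, with the per-head key projection folded into the query:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 100
h_e = rng.normal(size=(n, d))   # encoder hidden states, one per input token
h_d = rng.normal(size=(d,))     # current decoder hidden state
W_q = rng.normal(size=(d, d))   # this head's query projection
W_k = rng.normal(size=(d, d))   # this head's key projection

# Standard cross-attention scores require materializing per-head keys h_e @ W_k:
scores_standard = (h_d @ W_q) @ (h_e @ W_k).T

# Reformulated: fold W_k into the query, so the datastore holds only raw h_e:
scores_reformed = (h_d @ W_q @ W_k.T) @ h_e.T

print(np.allclose(scores_standard, scores_reformed))  # True
```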
Strengths: This approach can be integrated into any existing pretrained encoder-decoder transformer model to enable summary over unbounded inputs (subject to computational constraints).
This approach does not require retraining of the model, although further finetuning appears to improve performance.
The paper is well written and clear.
Weaknesses: The computational cost of encoding the entire input could be very high at inference time.
Many of the chosen benchmark approaches appear to be pretty arbitrary and not very convincing.
It is unclear to what extent the overlapping approach to encoding inputs aids the approach. Text is order-dependent, and filler words serve a purpose, which in many cases affects the semantic meaning of a particular word or sentence. A kNN approach over the most relevant words across the entire input could just find correlated tokens that lack any local context in how they were used within a particular phrase or sentence.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How is k chosen? Is it always set to the base model's context window length? What happens as k is varied?
It would be good to have a better idea of the computational performance of this kNN-based approach and how it scales, especially in terms of compute, latency, and memory. It seems that it would be prohibitively expensive to query the datastore at each timestep?
What is the motivation behind the training methods shown in section 3.2.? How does it weakly simulate nearest neighbours?
There are several mentions of surprising results with regard to the model being able to perform well with limited context, or the full input being unnecessary to produce a summary. How confident are you that the base model has not been trained on WikiSum or any of the datasets (and their derivatives) that it is being evaluated on?
In figure 3, what causes the dip at 64k and 100k datastore?
By CPU datastore, do you just mean system RAM? If so, please mention that.
Could you provide examples of what actually gets retrieved by kNN? That is, the parts of the input that are being used by the model for particular queries?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No broader impact statement is provided, but its inclusion would strengthen the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper! We were happy to read that you appreciated our main points: Unlimiformer can be integrated into existing pretrained encoder-decoders, makes it possible to summarize unbounded inputs, and does not require retraining.
We think that all your questions are addressable-- please see our response below. We would love to discuss further if anything is unclear.
**The computational cost of encoding the entire input could be very high.**
In order to reason over the entire input, we need to process that input *somehow*. An encoding cost that scales linearly with the input’s length (using the base model’s encoder) is comparable to other methods in the literature (e.g., SLED, Memorizing Transformers).
**"Many of the chosen benchmark approaches appear to be pretty arbitrary"**
For datasets - we evaluated Unlimiformer on GovReport and SummScreen, which are the two main summarization datasets from the SCROLLS benchmark (Shaham et al., EMNLP’2022). To evaluate over *even longer* inputs, we included an evaluation on BookSum (Kryscinski et al., EMNLP’2022) as well.
For baselines - we are not aware of any other long-range transformers that can utilize pretrained models and process long inputs without re-training. We thus compared Unlimiformer to the most relevant and related models: Memorizing Transformers (Wu et al., ICLR’2022), Longformer-Encoder-Decoder (LED; Beltagy et al., 2020) with the summarization-specific pretraining of PRIMERA (Xiao et al., ACL’2022), and SLED (Ivgi et al., TACL’2023).
**Does the overlapping approach to encoding inputs help?**
On average, encoding inputs in overlapping chunks led to a slight increase in performance (about 0.5 / 0.5 / 0.4 R1/R2/RL).
**If you retrieve individual words, they might be retrieved out of their local context**
Note that we retrieve **encoded hidden states** (vectors), from the output of the top layer of the encoder, rather than retrieving raw tokens.
These retrieved vectors are already-contextualized by the encoder, and are thus “aware” of their local context.
**How is k chosen?**
We fixed the number of nearest neighbors that we retrieved to be equal to the size of the vanilla model’s context size (e.g., 1024), such that Unlimiformer fully replaces the cross-attention to encoder hidden states. Setting k larger than this would require training, because the decoder was not trained to attend to more keys than its context window size. We leave this for future work.
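As a sketch of what this looks like at one decoding step (random data; exact top-k search stands in for the faiss index, and all names are illustrative), each head attends only to its k best-scoring encoder states instead of all n of them:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 5000, 32, 1024         # input tokens, hidden size, context-window-sized k
keys = rng.normal(size=(n, d))   # (reformulated) encoder states in the datastore
values = rng.normal(size=(n, d))
query = rng.normal(size=(d,))    # one head's (reformulated) query at this step

scores = keys @ query
top = np.argpartition(scores, -k)[-k:]       # the k nearest neighbors by dot product
w = np.exp(scores[top] - scores[top].max())  # softmax over the retrieved keys only
w /= w.sum()
attended = w @ values[top]                   # cross-attention output, shape (d,)
```

Because k matches the base model's context size, the decoder sees exactly as many keys as it was trained to attend to, just drawn from anywhere in the full input.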
**How does it scale in terms of compute, latency and memory?**
When the input is 100k tokens long, **Unlimiformer is only ~3.5x slower while processing 100x more input** than the base model (see Figure 4 in the main paper). There is some fixed additional cost to construct an index (a few seconds), but decoding is quite fast because retrieval from the index is sub-linear.
The additional memory needed scales linearly, but with a small cost per token, since Unlimiformer requires keeping only a single vector per input token. Using hidden states of size 1024, encoding **1M tokens** using fp16 takes **only 2GB of memory**. The index containing these hidden states can be stored in the GPU memory or in RAM, depending on the compute availability.
**What is the motivation behind the training methods in section 3.2.?**
“Retrieval Training” is a training approach that simulates the test time computation exactly: every cross-attention head performs kNN-search and attends to the retrieved keys. However, we believe that this training approach makes every cross-attention head attend only to “relevant keys” at training time, and thus cross-attention never learns to *down-weight* irrelevant keys. Thus, “Random-Encoded training” selects a random subset of $k$ encoded tokens for each cross-attention head at training time (to expose heads to "irrelevant" keys), while applying kNN at test time. “Alternating training” alternates batches of these methods.
We believe that these motivations were not explained well enough in the paper, and we will include them in the revised version. Thank you!
**How confident are you that the base model has not been trained on the datasets?**
This is a great question about “leakage” of test data from the pretraining data. In WikiSum, we agree that most models are likely pretrained on the entire English Wikipedia, and this is another explanation for the surprisingly strong performance of the baseline here. We will add this to the paper.
The other datasets are not explicitly included in BART’s pretraining data, especially not as pairs of documents/books and their summaries, though it is possible that some were indirectly included through OpenWebText. Although we believe that there is no serious leakage, any leakage would only make the baseline model’s performance be closer to Unlimiformer.
**In figure 3, what causes the dip at 64k and 100k datastore?**
We believe that this is mostly a limitation of evaluating such long and information-heavy generations. Although there is a slight drop at 64k and 100k, performance there is still significantly better than the vanilla BART base. The general trend is that processing longer context leads to better outputs.
**By CPU datastore, do you mean system RAM?**
Yes-- at inference time, we stored the hidden-state vectors in a datastore constructed with the FAISS library, either in GPU memory (VRAM) or in RAM.
**What actually gets retrieved by kNN?**
Note that we retrieve **encoded hidden states** (vectors) rather than tokens, so even vectors that were not directly retrieved by the decoder impacted the final output. When we plotted the frequency of retrieval for keys across the full decoding process in BookSum, we found no strong skew or pattern in the retrieved keys, and keys from the entire input were used by the model (Figure 2 in the rebuttal PDF).
**A broader impact statement would strengthen the paper**
We will include one in the next version-- thank you for the suggestion! | null | null | null | null | null | null |
Gold-YOLO: Efficient Object Detector via Gather-and-Distribute Mechanism | Accept (poster) | Summary: In this paper, the authors propose a Gather-and-Distribute (GD) mechanism for efficient information exchange in YOLOs, globally fusing multi-level features and injecting the global information into higher levels. The proposed GD-YOLO architectures show good results compared with the existing YOLO series. The authors also present a pre-training method in which the backbone is pre-trained on ImageNet-1K using the MAE method, improving the convergence speed and accuracy of the model.
Strengths: 1. In this paper, the authors proposed a gather-and-distribute mechanism to replace the traditional FPN structure. By using this unified module to gather and fuse information from all levels and subsequently distribute it to different levels, they can avoid the loss of information inherent in the traditional FPN structure and also enhance the neck’s partial information fusion capabilities without significantly increasing latency.
2. The paper is clear and organized, and easy to follow.
3. Authors have provided a comprehensive comparison between the proposed model and YOLOs.
Weaknesses: 1. The motivation is a little unclear. The advantage of the GD mechanism compared with a traditional FPN is not very clear.
2. Based on the results, the tradeoff between accuracy and latency is pretty similar compared with baselines (YOLOv8).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: If the best trade-off between latency and accuracy is the main goal of this method, isn't it fairer to compare with YOLOv8 instead of YOLOv6, since YOLOv8 has the closest latency to GD-YOLO?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. The idea is pretty interesting but the advantage of this proposed method doesn't show a very convincing result.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W-1: The motivation is a little unclear. The advantage of the GD mechanism compared with a traditional FPN is not very clear.
Thank you for your suggestions. Conventionally, features at different levels contain positional information about objects of different sizes: larger feature maps carry more low-dimensional texture information and the positions of smaller objects, while smaller feature maps carry more high-dimensional information and the positions of larger objects. The original motivation behind the traditional Feature Pyramid Network (FPN) is that, although different levels contain different information, this information can be shared across levels to enhance the network's performance. Many previous works have addressed the information loss that occurs in interactions between levels. However, because of the excessive number of paths and the indirect interaction in these networks, previous FPN-based fusion structures still suffer from low speed, limited cross-level information exchange, and information loss.
Here we focus on the efficiency of information fusion and the integrity of information retention. Going beyond FPN-based fusion structures, we propose a new scheme, the gather-and-distribute mechanism, which is a new information fusion structure for detection rather than a revised version of FPN. Visualized Class Activation Maps (CAM) of different-level feature maps (Figure 3) can be found in the global response PDF.
We can observe that features at different levels exhibit distinct preferences for objects of different sizes. In the traditional grid-structured FPN, as network depth and inter-level interaction increase, the sensitivity of the feature maps to object positions gradually diminishes and information is lost. Our proposed GD mechanism instead performs global fusion separately for high-level and low-level information, producing globally fused features that contain abundant positional information for objects of various sizes. When injected into the different branches, these features not only enrich the information in each branch but also avoid the information loss caused by "recursive" interaction.
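To make the contrast with FPN-style neighbor fusion concrete, here is a toy 1-D sketch of the gather-and-distribute idea (the list-based "feature maps", the mean-pool alignment, and the additive injection are our simplifications for illustration, not the paper's actual Low-GD/High-GD operators):

```python
def align(feat, size):
    # Crude 1-D feature alignment: average-pool down, or nearest-neighbor up.
    if len(feat) >= size:
        step = len(feat) // size
        return [sum(feat[i * step:(i + 1) * step]) / step for i in range(size)]
    return [feat[i * len(feat) // size] for i in range(size)]

def gather_and_distribute(levels):
    # Gather: align every level to a common size and fuse them all globally,
    # instead of fusing only adjacent levels as in an FPN.
    common = min(len(f) for f in levels)
    fused = [sum(v) / len(v) for v in zip(*(align(f, common) for f in levels))]
    # Distribute: inject the single globally fused feature back into each level
    # (here via a simple element-wise add).
    return [[x + g for x, g in zip(f, align(fused, len(f)))] for f in levels]

p3, p4, p5 = [1.0] * 8, [2.0] * 4, [4.0] * 2   # three toy "pyramid levels"
out = gather_and_distribute([p3, p4, p5])       # every level sees all levels
```

The point of the sketch is structural: each output level receives information from every input level in one step, rather than through a chain of pairwise, adjacent-level fusions.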
### W-2: Based on the results, the tradeoff between accuracy and latency is pretty similar compared with baselines (YOLOv8).
Thank you for your question. Our model shows significant performance improvements over previous works, including YOLOv8, at the N/S/M (nano, small, medium) sizes. At the L size, GD-YOLO-L also improves on YOLOv6-3.0 and YOLOv8 by +0.6 AP / +1.1 AP at +0 / +9 FPS, respectively.
### Q-1: If the best trade-off between latency and accuracy is the main goal of this method, isn't it fairer to compare with YOLOv8 instead of YOLOv6, since YOLOv8 has the closest latency to GD-YOLO?
Thank you for your question. It's perfectly normal to have such confusion due to the naming inconsistencies in the YOLOv6 series, which has led to misunderstandings for many. The chronological order of model releases is as follows: YOLOv6, YOLOv7, YOLOv6-2.0, YOLOv8, YOLOv6-3.0. YOLOv6-3.0 was released after YOLOv8 and significantly improved model accuracy, allowing YOLOv6-3.0 to surpass YOLOv8 on the precision-speed curve. This result is also evident in our precision-speed curve graph.
### L-1: The idea is pretty interesting but the advantage of this proposed method doesn't show a very convincing result.
Thank you for your question. Our core contribution lies in the proposal of the Gather-and-Distribute mechanism (GD mechanism), which is fast and effective. Our method gathers global information and distributes it to the different levels. The GD mechanism is applied at both high and low resolution to fully encourage information exchange. By separately considering low-dimensional and high-dimensional information, we construct a global information fusion mechanism, thereby unifying the previously disparate approaches for promoting information flow between different-level features. Our proposed GD mechanism represents a more comprehensive improvement over previous SOTA detectors.
The GD mechanism is a general concept and can be applied beyond YOLOs. We have extended the GD mechanism to other models and obtained significant improvements, as shown in Experiment-1, which you can find in the Global Author Rebuttal. These experiments demonstrate that our proposed GD mechanism exhibits robust adaptability and generalization, consistently bringing performance improvements across different tasks and models.
---
Rebuttal 2:
Title: End of the discussion window approaching
Comment: Dear anonymous reviewers,
Thank you for your constructive comments and valuable suggestions to improve this paper. If you have any more questions, we would be glad to discuss them with you.
Thank you very much.
Best regards, Author | Summary: In this research, the authors propose a novel Gather-and-Distribute (GD) mechanism implemented through convolution and self-attention operations. This mechanism, incorporated into the GD-YOLO model, significantly enhances multi-scale feature fusion capabilities and achieves a remarkable balance between latency and accuracy across all model scales. Notably, the researchers introduce MAE-style pretraining to the YOLO-series models for the first time, enabling unsupervised pretraining benefits. The GD-YOLO-N variant achieves exceptional results, with a 39.9% Average Precision (AP) on the COCO val2017 dataset and a remarkable 1030 Frames Per Second (FPS) on a T4 GPU. These results surpass the previous state-of-the-art model, YOLOv6-3.0-N, with a similar FPS by 2.4%.
Strengths: GD-YOLO is quite impressive work, I think, as it globalizes local features and can retrieve features without entangling them with other layers' features.
Weaknesses: 1. While GD-YOLO-N cannot use the proposed backbone due to its limited capacity, what will be the approaches for mobile deployment, as your main objective is YOLO for mobile deployment?
2. The GD-YOLO-N implementation is not quite understandable, as it misses a few parts like MIM and seems much the same as YOLOv6 with EMA and AAT, as stated.
3. A comparison with Transformer-based models could add a bit more value to this paper.
4. Abbreviations should be added after first introduction and then using the abbreviation is the ideal practice. For instance, in line 46, it should have been “... use Feature Pyramid Network (FPN) and…” instead of just FPN.
5. Line 45 to 54: This para basically represents the contribution part, which can be presented in bullet terms.
6. Line 58: “...SOTA YOLOv6…”: As per my knowledge, at this point YOLOv8 is the SOTA. I would request a check and update of this from your end.
7. Line 74-75: It is not mentioned what strengths YOLOv8 takes from its predecessors.
8. Line 94: “In this study, we will…..” - it is recommended to use present tense, instead of future tense.
9. Table 1: Best values should be bolded.
Minor:
1. Line 32: “...detectors.Despite…, Line 138: “...alignment module(Low-FAM)...” - Spacing issue.
2. Line 257: Double comma issue.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Why is the LAMB optimizer needed while already using SGD and a manually scheduled learning rate?
2. If you use EMA, then why is LAMB needed again?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations of GD-YOLO not added; however, I would suggest adding it.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W-1: While GD-YOLO-N cannot use the proposed backbone for limited capacity, what will be the approaches for mobile deployment as your main objective is YOLO for mobile deployment?
Thank you for your question. The main contribution of our work is the Gather-and-Distribute mechanism. Regarding the backbone, we are the first to apply convolutional MIM pre-training in the YOLO series, aiming to explore the effect of MIM on a CNN backbone for the real-time object detection task. We also compared the effects of MIM pre-training with ImageNet-classification supervised pre-training on GD-YOLO-S in the appendix and found a certain improvement. The reason for not implementing pre-training on GD-YOLO-N is that it is the most lightweight network within GD-YOLO; as such, its backbone struggles to show improvement on the MIM task.
For mobile deployment, we have demonstrated strong support on ONNX and TensorRT, which clears the path for deploying our model on mobile devices.
### W-2: GD-YOLO-N implementation is not quite understandable while it misses a few parts like MIM and seems pretty same as YOLOv6 with EWA and AAT as stated.
We understand your concerns about the GD-YOLO-N implementation. As you correctly noted, GD-YOLO-N does not incorporate Masked Image Modeling (MIM) pre-training, unlike our other models such as GD-YOLO-S. This decision was made because GD-YOLO-N is designed to be a lightweight model. Its backbone, due to its simplicity and size, does not benefit substantially from MIM pre-training.
GD-YOLO-N might appear similar to YOLOv6 with EMA and AAT because these are common techniques used to improve performance in object detection tasks. However, our work introduces the Gather-and-Distribute mechanism, which we believe adds unique value to our model. This mechanism, in conjunction with our other design choices, aims to strike a balance between model complexity and performance, particularly for deployment on mobile devices with limited computational resources.
I hope this clarifies your concerns about GD-YOLO-N. Please feel free to ask if you have further questions.
### W-3: Comparison with the Transformer based model could add a bit more value to this paper.
Thank you for your question. We have added a comparison with Transformer-based object detection models:
| Model                | #Params (M) | GFLOPs | FPS (bs=1) | AP-val | AP-val-50 |
| -------------------- | ----------- | ------ | ---------- | ------ | --------- |
| RT-DETR-L            | 32          | 110    | 114        | 53.0   | 71.6      |
| RT-DETR-X            | 67          | 234    | 74         | 54.8   | 73.1      |
| DINO-Deformable-DETR | 47          | 279    | 5          | 50.9   | 69.0      |
| GD-YOLO-N            | 5.6         | 12.1   | 563        | 39.9   | 55.9      |
| GD-YOLO-S            | 21.5        | 46     | 286        | 46.4   | 63.4      |
| GD-YOLO-M            | 41.3        | 87.5   | 152        | 51.1   | 68.5      |
| GD-YOLO-L            | 75.1        | 151.7  | 88         | 53.3   | 70.9      |
This highlights the absolute performance advantage of GD-YOLO in the domain of small models. Although RT-DETR has significant advantages over YOLO-L/X, the structural constraints of the transformer prevent RT-DETR from being downsized further. This makes GD-YOLO still the best choice in the field of small models.
### W-6: Line 58: “...SOTA YOLOv6…” : As per my knowledge, at this point, YOLOv8 is the SOTA. I would request for a checkup and update of this from your end.
Thank you for your question. It's perfectly normal to have such confusion due to the naming inconsistencies in the YOLOv6 series, which has led to misunderstandings for many. The chronological order of model releases is as follows: YOLOv6, YOLOv7, YOLOv6-2.0, YOLOv8, YOLOv6-3.0. YOLOv6-3.0 was released after YOLOv8 and significantly improved model accuracy, allowing YOLOv6-3.0 to surpass YOLOv8 on the precision-speed curve. This result is also evident in our precision-speed curve graph.
### W-7: Line 74-75: What strengths YOLOv8 takes from their predecessor, it is not mentioned.
Thank you for your question. YOLOv8 introduces a new Conv block called C2f, which compared to YOLOv5's C3 block, enhances the gradient path even further. In terms of loss calculation, YOLOv8 adopts the TaskAlignedAssigner positive sample assignment strategy and introduces the Distribution Focal Loss. YOLOv8 is more of an integration of design elements from various YOLO series models such as YOLOX, YOLOv6, and YOLOv7, and leans more towards engineering.
### Format problem:
- W-4: Abbreviations should be added after first introduction and then using the abbreviation is the ideal practice. For instance, in line 46, it should have been “... use Feature Pyramid Network (FPN) and…” instead of just FPN.
- W-8: Line 94: “In this study, we will…..” - it is recommended to use present tense, instead of future tense.
- W-9: Table 1: Best values should be bolded.
- Minor:
1. Line 32: “...detectors.Despite…, Line 138: “...alignment module(Low-FAM)...” - Spacing issue.
2. Line 257: Double comma issue.
Thanks for your suggestions, we will correct these formatting and grammar problems in the revision.
### Q-1: Why need to use LAMB Optimizer while already using SGD and manual scheduled Learning rate?
Thank you for your question. We utilize the LAMB optimizer only for MIM pre-trained backbones, following the practices in SparK. For the rest of the training process and self-distillation, we adopt SGD.
### Q-2: If you use EMA, then why again, LAMB is needed?
Thank you for your question. As with Q-1, we only followed SparK's design in pre-training, using the LAMB optimizer.
### Limitations: Limitations of GD-YOLO not added; however, I would suggest adding it.
Thank you for your question. Limitations can be found in section C Broader impacts and limitations in the Appendix.
---
Rebuttal 2:
Title: End of the discussion window approaching
Comment: Dear anonymous reviewers,
Thank you for your constructive comments and valuable suggestions to improve this paper. If you have any more questions, we would be glad to discuss them with you.
Thank you very much.
Best regards, Author | Summary: This paper studies the problem of efficient object detector and proposes the Gather-and-Distribute mechanism (GD) mechanism to alleviate the information fusion problem. The experiments on the COCO dataset demonstrate the effectiveness of the proposed method.
Strengths: + This paper studies an important topic, efficient object detection, and achieves a great balance between accuracy and speed in its results.
+ The ablation studies have been provided to verify the effectiveness of the proposed module.
Weaknesses: - The structural design of this paper is quite confusing in some aspects, such as the choice of where to inject information. For example, in Low-GD, semantic information is only injected into P3 and P4, while one would expect that global semantic information could also benefit the P5 branch. Similarly, in High-GD, information is only injected into N4 and N5. The authors should provide the rationale and advantages of this design choice to address these doubts.
- In line #157, the authors mention they were inspired by [28]. However, the injection design is more closely related to Topformer, which bears greater relevance to the structure of the current paper. The authors should consider discussing and citing Topformer as another source of inspiration or relevant related work.
[a] TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation, CVPR 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W-1: The structural design of this paper is quite confusing in some aspects, such as the choice of where to inject information. For example, in Low-GD, semantic information is only injected into P3 and P4, while one would expect that global semantic information could also benefit the P5 branch. Similarly, in High-GD, information is only injected into N4 and N5. The authors should provide the rationale and advantages of this design choice to address these doubts.
Thank you for your question. When designing the fusion mechanism, we aim for the two branches in the network to focus on low-dimensional and high-dimensional information respectively. Following this principle, we have designed and implemented two branches: Low-GD and High-GD. This design approach results in the fused features from the two branches having richer low-dimensional and high-dimensional information respectively.
Additionally, it is commonly believed that features at different levels contain positional information of objects of varying sizes. For instance, larger features encompass more low-dimensional texture information and positions of smaller objects, while smaller features contain more high-dimensional structural information and positions of larger objects. This aligns well with the characteristics of the two branches in the GD mechanism. Therefore, the global information generated by Low-GD is injected into feature maps with larger feature sizes, whereas the global information generated by High-GD is injected into feature maps with smaller feature sizes.
Such a choice is also based on a trade-off between accuracy and speed. We conducted ablation experiments to validate this idea:
| model | Inject level | AP | AP-50 | FPS-32 | Params | FLOPs |
| --------- | --------------------- | ----------- | ----------- | ------ | ------ | ------- |
| GD-YOLO-S | [P3/P4]-[N4/N5] | 45.4 / 46.1 | 62.5 / 63.3 | 446 | 21.5 M | 46.00 G |
| GD-YOLO-S | [P3/P4/P5]-[N3/N4/N5] | 46.3 / 46.6 | 63.4 / 64.0 | 397.76 | 23.2 M | 50.43 G |
The experimental results indicate that as the inject level increases, the model's accuracy also increases. However, at the same time, the speed of the model decreases. The final version provided in the paper represents our optimal choice in balancing speed and accuracy.
### W-2: In line #157, the authors mention they were inspired by [28]. However, the injection design is more closely related to Topformer, which bears greater relevance to the structure of the current paper. The authors should consider discussing and citing Topformer as another source of inspiration or relevant related work.
Thank you for your reminder. TopFormer and SeaFormer are both important sources of inspiration for us. We will include this discussion, along with M2Det [1] and RHF-Net [2], in our final version:
Built upon the concept of global information fusion, TopFormer has achieved remarkable results in semantic segmentation tasks. Expanding on the foundation of TopFormer, we take a step further by separately considering high-dimensional and low-dimensional information. We meticulously design two branches, namely Low-GD and High-GD, bringing the notion of global information fusion into the realm of object detection. As a result, we achieve SOTA performance on the object detection task.
---
Rebuttal 2:
Title: End of the discussion window approaching
Comment: Dear anonymous reviewers,
Thank you for your constructive comments and valuable suggestions to improve this paper. If you have any more questions, we would be glad to discuss them with you.
Thank you very much.
Best regards, Author | Summary: In this paper, the authors proposed an efficient object detection network to make a new trade-off between efficiency and effectivity. In the framework, a lightweight adjacent-layer fusion module termed as gather-and-distribute (GD) mechanism is proposed to take place the conventional neck module in general detectors. Experimental results on COCO built-on the YOLO-series off-the-peg detectors demonstrate the effectiveness of the proposed method.
Strengths: -The bilateral interaction of information in multi-layer layers is interesting, and I like the work that simple changes can bring significant improvement for foundational tasks.
-The experimental setup of ablation studies on COCO is helpful for follow-up work in the future.
Weaknesses: The main weakness of this paper is the limited technical novelty and the relatively inadequate experiments. First, although the authors try to explain that the proposed GD-mechanism neck module can improve detection accuracy and benefit computational efficiency, the proposed module can be seen as a stacking of existing techniques. Additionally, the experiments are carried out only on YOLO-series methods, and there is a lack of testing and verification on other datasets.
This paper needs further modification in terms of layout design, e.g., the fonts in the Fig.4 are generally small and unclear, which is very difficult for review.
Page 3 Line 82-83, Improving ->can improve, introduce -> introduced.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: The innovation of the proposed method needs to be improved comprehensively, and there is a lack of more substantial analysis and understanding in terms of innovation in technology. Then, it is necessary to verify the scalability and robustness of the proposed method in more experiments and even more tasks. In addition, the writing and layout concept of the article still needs to be improved.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1: The novelty of the proposed module can be seen as a compromise similar to the stack of the existing technologies, and lack of testing and verification on other datasets.
Thank you for your suggestions. Our core contribution lies in the proposal of the Gather-and-Distribute mechanism (GD mechanism). By separately considering low-dimensional and high-dimensional information, we construct a global information fusion mechanism, thereby unifying the previously disparate approaches for promoting information flow between different-level features. Our proposed GD mechanism represents a more comprehensive improvement over previous works.
In the process of network construction, we have drawn from and built upon the experiences of previous researchers. Rather than focusing solely on performance improvements achieved by enhancing specific operators or local structures, our emphasis lies on the conceptual shift brought about by the GD mechanism in comparison to the traditional FPN structure. Through the GD mechanism, achieving SOTA performance for the YOLO model is attainable using simple and easily applicable operators. This serves as strong evidence for the effectiveness of the proposed approach.
Additionally, during the network construction process, we intentionally opted for simple and thoroughly validated structures. This choice serves to prevent potential development and performance issues arising from certain operators being unsupported by deployment devices. As a result, it guarantees the usability and portability of the entire mechanism. Moreover, this decision also creates opportunities for future performance enhancements.
The GD mechanism is a general concept and can be applied beyond YOLOs. We have extended the GD mechanism to other models and obtained significant improvements, as shown in Experiment-1, which you can find in the Global Author Rebuttal. These experiments demonstrate that our proposed GD mechanism exhibits robust adaptability and generalization, consistently bringing performance improvements across different tasks and models.
### Format problem
- W-2: Fig.4 are generally small and unclear
- W-3: Page 3 Line 82-83, Improving ->can improve, introduce -> introduced.
Thanks for your suggestions, we will correct these formatting and grammar problems in the revision.
### Q-1: Lack of in-depth analysis and understanding regarding innovation, along with insufficient evidence of the scalability and robustness of the proposed method.
Thank you for your suggestions. We will include more discussion of related works on multi-scale features in the final version; for details, please refer to Discuss-1 in the global response.
In order to demonstrate the scalability and robustness of our proposed GD mechanism, in addition to the experiments conducted in semantic segmentation and instance segmentation tasks as mentioned in W1, we have conducted the following additional experiments to supplement our idea:
| Model                | AP          | AP-50       | FPS-32 | Params | FLOPs   |
| -------------------- | ----------- | ----------- | ------ | ------ | ------- |
| GD-YOLO-S            | 45.4 / 46.1 | 62.5 / 63.3 | 446    | 21.5 M | 46.00 G |
| GD-YOLO-S-all_trans  | 46.9 / 47.0 | 64.1 / 64.4 | 258.56 | 25.2 M | 57.95 G |
| GD-YOLO-S-all_conv   | 45.6 / 46.1 | 62.5 / 63.2 | 417.76 | 26.6 M | 47.06 G |
| GD-YOLO-S-inject_all | 46.3 / 46.6 | 63.4 / 64.0 | 397.76 | 23.2 M | 50.43 G |
We have developed three variants based on GD-YOLO-S:
1. GD-YOLO-S-all_trans: Replaces the fusion operators in both Low-IFM and High-IFM with Transformers.
2. GD-YOLO-S-all_conv: Replaces the fusion operators in both Low-IFM and High-IFM with Convolutions.
3. GD-YOLO-S-inject_all: Adds Inject models to all P-level and N-level, injecting global information into each level.
From the results of experiments 1 and 2, it is evident that the model's performance does not significantly degrade with changes in the operators within the fusion module. This demonstrates the strong robustness of the GD mechanism and its insensitivity to operators. The GD mechanism can be effortlessly combined with any operators, introducing new features by incorporating new operators.
In experiment 3, we extended the number of layers for information injection within the GD mechanism, resulting in improved accuracy at the cost of reduced speed. Additionally, in section 4.3.1 "Ablation study on GD structure" of the main text, we conducted detailed ablation experiments on various modules, further demonstrating the scalability of the proposed GD mechanism. In practical applications or different tasks, these modules can be freely combined and adjusted based on specific requirements to achieve optimal performance.
As per your suggestions, we will include a more in-depth analysis of multi-scale features in the revised version. And correct the grammar problems in the article, improve the graphic layout.
---
Rebuttal 2:
Title: End of the discussion window approaching
Comment: Dear anonymous reviewers,
Thank you for your constructive comments and valuable suggestions to improve this paper. If you have any more questions, we would be glad to discuss them with you.
Thank you very much.
Best regards, Author | Rebuttal 1:
Rebuttal: ### Experiment-1:
- Instance Segmentation Task
Replace different Necks in Mask R-CNN and train/test on the COCO instance dataset.
| Model             | Neck  | FPS  | Bbox mAP | Bbox mAP:50 | Segm mAP | Segm mAP:50 |
| ----------------- | ----- | ---- | -------- | ----------- | -------- | ----------- |
| MaskRCNN-ResNet50 | FPN   | 21.6 | 38.2     | 58.8        | 34.7     | 55.7        |
| MaskRCNN-ResNet50 | AFPN  | 19.1 | 36.0     | 53.6        | 31.8     | 50.7        |
| MaskRCNN-ResNet50 | PAFPN | 20.2 | 37.9     | 58.6        | 34.5     | 55.3        |
| MaskRCNN-ResNet50 | GD    | 18.7 | 40.7     | 59.5        | 36.0     | 56.4        |
- Semantic Segmentation Task
Replace the Neck in PointRend and train/test on the Cityscapes dataset.
| model|Neck|FPS| mIoU| mAcc|aAcc|
| - |-|-|-|-|-|
| PointRend-ResNet50|FPN|11.21| 76.47| 84.05| 95.96|
| PointRend-ResNet50| GD|11.07| 78.54| 85.60| 96.12|
| PointRend-ResNet101| FPN|8.76| 78.3| 85.705|96.23|
| PointRend-ResNet101| GD|11.07|80.01|86.15|96.34|
- Performance of GD mechanism on other object detection models
Replace the neck in EfficientDet and train/test on the COCO dataset.
| model|Neck|FPS|AP|
|-|-|-|-|
| EfficientDet|BiFPN| 6.0|34.4 |
| EfficientDet|GD|5.7|38.8|
### Experiment-2:
- FPN-like model comparison
| Model|FPS| AP-val2017 |
|-|-|-|
| M2Det| 15.178| 37.8|
| AFPN-R50-640x640 | 28.1| 39|
| AFPN-R50-800x1000 | 23.8| 41|
| AFPN-R101-800x1000 | 16.5| 42.3|
| YOLOF| 36.63| 37.7|
| YOLOF-R101| 25.38| 39.8|
| YOLOF-X101| 10.88| 42.3|
| CFPNet-S| 331| 41.1|
| CFPNet-M| 165| 46.4|
| CFPNet-L| 95| 49.4|
| GD-YOLO-N| 684| 39.9|
| GD-YOLO-S| 337| 46.4|
| GD-YOLO-M| 177| 51.1|
| GD-YOLO-L| 110| 53.3|
### Discuss-1: An exploration of the motivation behind GD, as well as in-depth discussions on FPN and multi-scale features.
Traditionally, features at different levels carry positional information about objects of various sizes. Larger features encompass low-dimensional texture details and positions of smaller objects. In contrast, smaller features contain high-dimensional information and positions of larger objects. The original idea behind Feature Pyramid Networks (FPN) is that these diverse pieces of information can enhance network performance through mutual assistance. Previous works have addressed the information loss problem when interacting between different levels. For instance, M2Det [1] introduced an efficient MLFPN architecture with U-shape and Feature Fusion Modules. Ping-Yang Chen [2] improved interaction between deep and shallow layers using bidirectional fusion modules. Unlike these inter-layer works, [3] explored individual feature information using the Centralized Feature Pyramid (CFP) method. Additionally, [4] extended FPN with the Asymptotic Feature Pyramid Network (AFPN) to interact across non-adjacent layers. In response to FPN's limitations in detecting large objects, [5] proposed a refined FPN structure. YOLOF [6] achieved state-of-the-art performance with single-level features. SAFNet [7] introduced Adaptive Feature Fusion and Self-Enhanced Modules. [8] presented a parallel FPN structure for object detection with bi-directional fusion.
However, due to the excessive number of paths and indirect interaction methods in the network, previous FPN-based fusion structures still suffer from low speed, limited cross-level information exchange, and information loss.
Here we focus on the efficiency of information interaction and fusion, and on the integrity of information retention. Beyond FPN-based fusion structures, we propose a new scheme, i.e., the gather-and-distribute (GD) mechanism, which is a new information fusion structure for detection rather than a revised version of FPN.
As shown in the visualized Class Activation Maps (CAM) of different levels in the appendix, we can observe that features at different levels exhibit distinct preferences for objects of different sizes. In the traditional grid-structured FPN, as network depth increases and information interacts between different levels, the sensitivity of the feature map to object positions gradually diminishes, accompanied by information loss. Our proposed GD mechanism performs global fusion separately for high-level and low-level information, resulting in globally fused features that contain abundant position information for objects of various sizes. When injected into the different branches, this not only enriches the information in each branch but also avoids the information loss caused by "recursive" interaction.
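To make the contrast with recursive FPN-style interaction concrete, here is a minimal conceptual sketch of the gather-and-distribute idea; the resize/mean/add operators, shapes, and function names are illustrative assumptions, not the paper's actual FAM/IFM/Inject implementations:

```python
import numpy as np

def resize_to(feat, size):
    # Nearest-neighbor resize, standing in for the feature-alignment (FAM) step
    h, w = feat.shape[1:]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return feat[:, ys][:, :, xs]

def gather_and_distribute(levels, fuse_size):
    # Gather: align every pyramid level to a common size, then fuse globally
    aligned = [resize_to(f, fuse_size) for f in levels]
    global_feat = np.mean(aligned, axis=0)  # stands in for the fusion (IFM) step
    # Distribute: inject the globally fused feature back into every level,
    # so each branch sees all-level information without recursive interaction
    out = []
    for f in levels:
        inj = resize_to(global_feat, f.shape[1])  # align global feature to this level
        out.append(f + inj)  # stands in for the Inject module
    return out

# Toy three-level feature pyramid with shapes (C, H, W)
levels = [np.random.randn(8, s, s) for s in (32, 16, 8)]
fused = gather_and_distribute(levels, fuse_size=16)
```

The key structural point is that each level interacts with the single globally fused feature rather than with its neighbors level by level.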
### Discuss-2: Differences in the feature alignment module between GD-YOLO and other similar works
Both M2Det [1] and RHF-Net [2] have incorporated additional information fusion modules within their alignment modules. In M2Det, the SFAM module includes an SE block, while in RHF-Net, the Spatial Pyramid Pooling block is augmented with a Bottleneck layer. In contrast to M2Det and RHF-Net, GD-YOLO leans towards functional separation among modules, segregating feature alignment and feature fusion into distinct modules. Specifically, the FAM module in GD-YOLO focuses solely on feature alignment, which ensures its computational efficiency, and the LAF module efficiently merges features from various levels at minimal computational cost, leaving the bulk of the fusion and injection functionality to other modules.
Based on the GD mechanism, achieving SOTA performance for the YOLO model can be accomplished using simple and easily accessible operators. This strongly demonstrates the effectiveness of the approach we propose. Additionally, during the network construction process, we intentionally opted for simple and thoroughly validated structures. This choice serves to prevent potential development and performance issues arising from certain operators being unsupported by deployment devices. As a result, it guarantees the usability and portability of the entire mechanism.
Pdf: /pdf/897622685fcefe0d123e7d8112e1d700211d7b8c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents a real-time object detection method for the YOLO series, introducing a 'Gather-and-Distribute' (GD) mechanism. Despite achieving good results on the COCO dataset, the paper lacks significant novelty and doesn't significantly advance multi-scale feature fusion or FPN-based methods. A deeper comparison with prior work could strengthen its scientific contribution.
Strengths: 1. The paper exhibits commendable clarity with well-written content and lucidly presented figures, making it easy to understand.
2. Performance-wise, the proposed contribution demonstrates impressive results on the COCO dataset in terms of both accuracy and computational efficiency.
3. The introduction of the 'Gather-and-Distribute' (GD) mechanism is particularly striking. This method enhances multi-scale feature fusion capabilities, striking an excellent balance between latency and accuracy across different model scales.
Weaknesses:
1. Despite achieving strong results on the COCO dataset in terms of accuracy and efficiency, the paper lacks substantial scientific novelty. The method, while technically sound, doesn't significantly advance the field.
2. The manuscript's focus is on real-time object detection and multi-scale features for the YOLO-Series. However, the related work section needs to delve more into multi-scale features.
3. The concepts of Low/High-FAM and the lightweight adjacent layer fusion (LAF) module are not new in the field, having been discussed in M2Det [1] and [2] respectively.
Suggestions:
1. The authors need to restructure the related work section to better represent and compare with multi-scale features[1] or FPN-based methods [2-8].
2. Adding more comparative analyses and surveys related to multi-scale features and FPN-based methods will establish their work as more than a minor modification of previous works.
3. Despite YOLO-Series being known for speed and efficiency, GD-YOLO appears to be slower than the baseline (YOLOV6: v3). The authors should address this discrepancy.
4. To reiterate, the work seems to be more application-focused with minimal contribution to the field. I recommend the authors address the points above to enhance their scientific contribution, which may change the review score. Otherwise, the work might not meet the standards of a top-tier conference.
[1] Zhao, Q., Sheng, T., Wang, Y., Tang, Z., Chen, Y., Cai, L., & Ling, H. (2019, July). M2det: A single-shot object detector based on multi-level feature pyramid network. In Proceedings of the AAAI conference on artificial intelligence (Vol. 33, No. 01, pp. 9259-9266).
[2] Chen, P. Y., Hsieh, J. W., Wang, C. Y., & Liao, H. Y. M. (2020). Recursive hybrid fusion pyramid network for real-time small object detection on embedded devices. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 402-403).
[3] Quan, Y., Zhang, D., Zhang, L., & Tang, J. (2023). Centralized feature pyramid for object detection. IEEE Transactions on Image Processing.
[4]Yang, G., Lei, J., Zhu, Z., Cheng, S., Feng, Z., & Liang, R. (2023). AFPN: Asymptotic Feature Pyramid Network for Object Detection. arXiv preprint arXiv:2306.15988.
[5] Jin, Z., Yu, D., Song, L., Yuan, Z., & Yu, L. (2022, October). You should look at all objects. In European Conference on Computer Vision (pp. 332-349). Cham: Springer Nature Switzerland.
[6] Chen, Q., Wang, Y., Yang, T., Zhang, X., Cheng, J., & Sun, J. (2021). You only look one-level feature. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 13039-13048).
[7] Jin, Z., Liu, B., Chu, Q., & Yu, N. (2020). SAFNet: A semi-anchor-free network with enhanced feature pyramid for object detection. IEEE Transactions on Image Processing, 29, 9445-9457.
[8] Chen, P. Y., Chang, M. C., Hsieh, J. W., & Chen, Y. S. (2021). Parallel residual bi-fusion feature pyramid network for accurate single-shot object detection. IEEE Transactions on Image Processing, 30, 9099-9111.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Could you elaborate on why YOLOv7-E6E has been chosen as the "L" model for comparison, given that its size is 1280? It seems that YOLOv7-X would be a more suitable choice for a fair comparison.
2. The relevance of the section "Masked image modeling pre-training" isn't clear, as it doesn't appear to directly relate to multi-scale features and FPN-based methods. Additionally, the results from this section don't seem to add significant value. Would it be more beneficial to include additional comparisons or ablation studies for multi-scale features in lieu of this section?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: I want to highlight the author's commendable acknowledgment of the potential military applications of their model. They clearly state their commitment to prevent such uses. This level of social impact consideration is laudable and sets a positive precedent for responsible research conduct.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
### W-1: The paper lacks substantial scientific novelty, and doesn't significantly advance the field.
Thank you for your suggestions. Our core contribution lies in the proposal of the Gather-and-Distribute (GD) mechanism, which is fast and effective. Our method gathers global information and distributes it to different levels. The GD mechanism operates in both high and low dimensions to fully enhance information exchange. By separately considering low-dimensional and high-dimensional information, we have constructed a global information fusion mechanism, thereby unifying the previously disparate approaches for promoting information flow between features at different levels. Our proposed GD mechanism represents a more comprehensive improvement over previous SOTA detectors.
The GD mechanism is a general concept and can be applied beyond YOLOs. We have extended the GD mechanism to other models and obtained significant improvements, as shown in Experiment-1 of the global response.
### W-2: The related work section needs to delve more into multi-scale features.
Thank you for your suggestions. We’ll include more discussion of related works on multi-scale features in the final version; for details, please refer to Discuss-1 in the global response.
### W-3: The issue of similarity between the FAM/LAF modules and M2Det [1] / [2].
Thank you for raising the question. Rather than focusing on improving specific operators or local structures, we propose a new Gather-and-Distribute (GD) mechanism compared to the traditional FPN structure, without relying on the characteristics of special operators.
We’ll include the discussion of M2Det [1] and RHF-Net [2] in our final version; for details, please refer to Discuss-2 in the global response.
## Suggestions
### S-1: The authors need to restructure the related work section to better represent and compare with multi-scale features[1] or FPN-based methods [2-8].
Thank you for your suggestions. We will follow your advice and incorporate a more in-depth investigation and analysis of multi-scale features in the revised version. Additionally, we will cite [1-8] and other relevant articles to comprehensively outline the historical development of the field.
We’ll include more discussion of related works on multi-scale features in the final version; for details, please refer to Discuss-1 in the global response.
In addition, we tested the inference speed of some of the open-source models on a V100 and plotted the accuracy-speed curve, which demonstrates the excellent performance of our model. The experimental results can be found in Experiment-2, and Figure 2 can be found in the global response PDF.
### S-2: Adding more comparative analyses and surveys related to multi-scale features and FPN-based methods will establish their work as more than a minor modification of previous works.
Thank you for your suggestions. Please refer to Discuss-1 in the global response for the specific content.
### S-3: GD-YOLO appears to be slower than the baseline (YOLOV6: v3)
Thank you for your feedback. In the comparison with previous generations of YOLO models, our focus lies more on the trade-off between speed and accuracy improvements, rather than solely aiming for the utmost speed or precision in a single model. Of course, we have developed a smaller and faster model (GD-YOLO-N-2), and the specific performance is detailed in the table below:
| model|FPS-32|AP|AP:50|
|-|-|-|-|
|YOLOv6-3.0-N|1187|37.0 / 37.5|52.7 / 53.1|
|GD-YOLO-N-2|1211|38.09 / 38.41|54.92 / 55.14 |
GD-YOLO-N-2 is built from GD-YOLO-N by reducing the parameter count in the neck (removing the LAF module and the High-GD branch), resulting in a smaller and faster model. This model outperforms the current SOTA YOLOv6-3.0-N in both accuracy and speed. The reason we present results for GD-YOLO-N in the article, rather than GD-YOLO-N-2, is to maintain consistent structures across models of different sizes and to ensure result continuity.
### S-4: The work seems to be more application-focused with minimal contribution to the field.
Thanks for your suggestion. Our contribution is not minimal: we propose a new scheme, the GD mechanism, which is a general concept and can be applied beyond YOLOs (details can be found in the responses to W-1/2). Moreover, we believe our paper is appropriate for NeurIPS and appeals to the research community, as the NeurIPS 2023 Call For Papers states: “We invite submissions presenting new and original research on topics including but not limited to the following: Applications (e.g., vision, language, speech and audio)”.
## Questions
### Q-1: Why YOLOv7-E6E has been chosen as the "L" model for comparison
Thank you for sharing the information. Below is the test result for YOLOv7-X, whose performance is poorer than that of YOLOv7-E6E. A more intuitive Figure 1 can be found in the global response PDF.
|model|FPS-32|AP|AP:50|
|-|-|-|-|
|YOLOv7-X|73.92|52.9|71.2|
### Q-2: The value and relevance of pre-training
Thank you for your question. We would like to emphasize that the use of MIM pre-training is not meant to have a direct correlation with the Gather-and-Distribute method. Instead, it is intended as a practical and robust technique for real-world applications, enhancing the overall performance of the model without adding inference latency. To the best of our knowledge, we are the first to apply MIM pre-training to the YOLO series. The experimental results presented in the appendix demonstrate the advantages of pre-training with ImageNet classification.
---
Rebuttal 2:
Title: End of the discussion window approaching
Comment: Dear anonymous reviewers,
Thank you for your constructive comments and valuable suggestions to improve this paper. If you have any more questions, we would be glad to discuss them with you.
Thank you very much.
Best regards, Author
---
Rebuttal 3:
Comment: I appreciate that the authors addressed all of my concerns. They've acknowledged potential shortcomings regarding scientific novelty and have effectively cited related work to provide context. Moreover, the exploration of the motivation behind GD, as well as comprehensive discussions on FPN and multi-scale features, enhance the paper's value. The additional experimental results shed more light on its contributions, especially concerning multi-scale features.
Additionally, the authors have incorporated some of my suggestions and presented convincing results.
However, I would still recommend removing the section titled "Masked image modeling pre-training." While this topic is interesting, it might be more fitting for a separate paper. For this work, focusing on the contributions of the multi-scale feature would be more appropriate.
To summarize, I am satisfied with the authors' responses in the rebuttal. The comparison of various multi-scale feature methods, paired with thorough experimentation, bolsters the paper's scientific credibility. Given these considerations, I have adjusted my rating to "Borderline Accept."
---
Rebuttal Comment 3.1:
Comment: Dear anonymous reviewers,
We sincerely appreciate you taking time to review our responses and contributing to improve this paper. We will carefully follow reviewer's advice to incorporate the addressed points in updated version.
Best regards, Author | null | null | null | null | null | null |
The CLIP Model is Secretly an Image-to-Prompt Converter | Accept (poster) | Summary: This paper demonstrates that the CLIP model in Stable Diffusion inherently possesses the ability to convert images into text prompts, which can be achieved by utilizing a linear projection matrix that is calculated in a closed form.
Strengths: 1. The motivation of image-to-prompt conversion is clear and the proposed closed-form method is concise.
2. The capability of image-to-prompt conversion can be further enhanced by finetuning the model with a small amount of data and time.
3. The paper demonstrates various applications of the method.
Weaknesses: 1. The main concern lies in the experiments section:
(1) The quantitative comparison between SD-IPC and SD-IPC-FT can be given. And the quantitative and qualitative results when separately fine-tuning the two specific types of parameters (cross-attention layers or deep prompts) also can be given.
(2) Why does SD-IPC not adopt the backbone of SD-R, i.e., Stable Diffusion v2.1?
(3) In Sec.4.3, I suggest the paper adopts the dataset and edited prompt used by DreamBooth (https://github.com/google/dreambooth) to make a full comparison. Both qualitative and quantitative results should be presented. And the proposed method can also be compared with more methods (e.g., DreamBooth and Textual Inversion).
(4) It would be better if the quantitative results of the first ablation experiment are presented.
2. The motivation for applying deep prompts tuning in SD-IPC-FT needs to be further explained.
3. The editability of the proposed method may be limited.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weaknesses. I am willing to improve my score if the concerns are addressed.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations have been described in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments. The common questions are first answered in the **General Responses**, then we clarify specific questions.
___
### Q1: The quantitative comparison between SD-IPC and SD-IPC-FT can be given. And the quantitative and qualitative results when separately fine-tuning the two specific types of parameters (cross-attention layers or deep prompts) also can be given.
We appreciate your valuable comment. For a comprehensive ablation study, we perform quantitative tests on image variation and text-edited image variation using images from the benchmark in **General Responses Q3**. DINO and CLIP-I are computed between the generated images and real images. For image variation, we calculate CLIP-T between the generated image and the prompt "This is a photo of [Class Name].". For text-edited image variation, we use the editing text as the prompt, such as "A [Class Name] with a mountain in the background.". We report the results of individually fine-tuning the two parts, referred to as SD-IPC-FT (C) for the CLIP part and SD-IPC-FT (U) for the U-Net part.
The qualitative results of image variation are shown in Rebuttal Figure 2; the quantitative results are:
| Method | DINO | CLIP-I | CLIP-T |
| :--- | :---: | :---: | :---: |
SD-IPC | 44.60 | 77.44 | 25.47 |
SD-IPC-FT (C) | 49.11 | 76.51 | 25.82 |
SD-IPC-FT (U) | 48.53 | 79.06 | **26.17** |
SD-IPC-FT | **52.03** | **79.59** | 25.90 |
The qualitative results of text-edited image variation are shown in Rebuttal Figure 3; the quantitative results are below:
| Method | DINO | CLIP-I | CLIP-T |
| :--- | :---: | :---: | :---: |
SD-IPC | 31.09 | 68.66 | 26.84 |
SD-IPC-FT (C) | 29.10 | 67.03 | 27.99 |
SD-IPC-FT (U) | 35.21 | 69.99 | 28.56 |
SD-IPC-FT | **40.28** | **71.97** | **28.69** |
As depicted in the two tables, fine-tuning both the CLIP prompts and the U-Net layers contributes to better results.
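For reference, DINO and CLIP-I style scores in such evaluations are typically averaged cosine similarities between feature embeddings. A minimal sketch under that assumption follows; the toy vectors stand in for real DINO or CLIP features, and this is not the authors' evaluation code:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pairwise_score(gen_embs, ref_embs):
    # Average cosine similarity over all (generated, reference) pairs,
    # the usual recipe for DINO / CLIP-I style image-similarity scores
    return float(np.mean([cosine_sim(g, r) for g in gen_embs for r in ref_embs]))

# Toy embeddings standing in for DINO or CLIP features
gen = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ref = [np.array([1.0, 0.0])]
score = pairwise_score(gen, ref)  # averages 1.0 and 0.0 -> 0.5
```

CLIP-T is computed the same way, except one side of the pair is the CLIP embedding of the text prompt rather than of a reference image.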
___
### Q2: Why does SD-IPC not adopt the backbone of SD-R, i.e., Stable Diffusion v2.1?
We appreciate your suggestion. Initially, we followed Custom Diffusion [1], which employs Stable Diffusion v1.4, as a starting point for developing our method. SD-R [2] was made available in March 2023, subsequent to our method's development. In response to your point, we have ascertained that SD-IPC is compatible with Stable Diffusion v2.1 as well, and the variation results can be found in Rebuttal Figure 5.
___
### Q3: In Sec.4.3, I suggest the paper adopts the dataset and edited prompt used by DreamBooth to make a full comparison. Both qualitative and quantitative results should be presented.
Thank you for pointing out this issue, we have addressed this matter in **General Responses Q2**, where we present the experiments of customized generation. The qualitative comparisons are in Rebuttal Figure 1. We will incorporate these results into our final version and provide more visual results in the supplementary material.
___
### Q4: It would be better if the quantitative results of the first ablation experiment are presented.
Thank you for the constructive comments. The ablation study is reported in **Q1**.
___
### Q5: The motivation for applying deep prompts tuning in SD-IPC-FT needs to be further explained.
Given our limited training data, consisting of only 100 images for fine-tuning Stable Diffusion [1], directly adjusting every parameter could result in severe overfitting and the loss of previously acquired knowledge. A solution emerged from prior research [3], where deep prompt-tuning was shown to resist overfitting. On the other hand, the fine-tuning aims to address inferior CLIP features and to change the focus onto different aspects of the feature (object, scene, or portrait). Consequently, we have chosen to adopt this approach.
___
[1] Kumari, Nupur, et al. "Multi-concept customization of text-to-image diffusion." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[2] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[3] Zhou, Ziqin, et al. "Zegclip: Towards adapting clip for zero-shot semantic segmentation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer hEWJ:
Thank you for reviewing our paper. Just a friendly reminder that **the author-reviewer discussion will close soon**, and we eagerly await your feedback. In response to your comment, we've updated the comprehensive ablation studies in (text-edited) image variation tasks. Additionally, we have presented the updated quantitative and qualitative results from the Customized Generation Benchmark. Could you please take a look at these updates?
We're here to discuss any more questions or concerns you may have about our paper.
With warm regards,
Authors | Summary: This paper focuses on the problem of generating images by a reference image w/ or w/o further text guidance. The core of this problem partly lies in how to convert the images to embeddings that can be directly feed into Stable Diffusion model. To this end, the authors propose to leverage CLIP model, the text encoder of which is also part of the stable diffusion model. They compare results with Stable Diffusion Re-imagination, as well as Custom diffusion.
Strengths: The strengths are as follows:
- This is a timely and interesting topic. How to generate new images from a reference image has been an interesting problem, especially given the recent burst of diffusion models. Compared to previous methods such as DreamBooth, this method is lightweight. The authors even provide a version without any fine-tuning, though its performance may not be that good; they also provide an easy-to-fine-tune pipeline which seems to largely improve the performance.
- The training cost is much lower. Compared to SD-R which requires 200,000 GPU-hours, this method here reduced the cost to only 100 images with 1 GPU-hour. Compared to Custom Diffusion which requires 250 iterations of fine-tuning, this paper only required ~30 steps.
- This paper provides generation capability both w/ and w/o text, in addition to the reference image. This is a nice property.
- The results look reasonable.
Weaknesses: The first thing to improve might be the writing.
(1) The abstract part can be improved. When I first read the abstract without knowing the main text, I was a bit struggling to understand which exactly problem this paper is particularly trying to solve. Converting image to text? Improving Diffusion? or Image-to-Image generation? This part can be clearer.
(2) There is actually not much content discussing related work, despite the existence of Section 2. I would recommend the authors have a separate related-work section that discusses prior work sufficiently. For example, the current Section 2.3 only discussed SD-R and just mentioned DreamBooth, textual inversion, and Custom Diffusion. But there are more works, such as InstructPix2Pix, Plug-and-Play, etc. The similarities and differences can be articulated. Besides, much background content in Section 2 can actually be cut, e.g., lines 81-84.
(3) The formulation in Section 3 can be improved. E.g., where does the 76 come from? This is not obvious for people unfamiliar with the CLIP details. When you combine the text prompt and the converted image prompt in equation (8), how exactly do you do that?
(4) you can have a figure illustrating the fine-tuning process in section 3.2
The next question I have is: can this method be generally applied to other text-to-image diffusion models? It seems to me that this paper exploits the prior knowledge that Stable Diffusion uses the CLIP text encoder, so you can use the CLIP image encoder to convert an image to a text prompt. This is fine, but would this still work with other approaches such as Imagen that rely on a T5 model? E.g., DreamBooth is agnostic to the text encoder and thus is more generic.
While the authors claimed this method is cost-effective compared to SD-R, which was trained on millions of images over 200,000 GPU-hours, have the authors measured the out-of-distribution capability sufficiently? E.g., SD-R may train just once and generalize to all images, but this method needs to be trained for each domain.
Detailed question:
(1) It seems the way you combine text and image prompts in eq (8) is ad hoc. What if the text tokens are longer than 76 — then where do you put those $f_{txt}^{comb}$? This seems technically problematic.
(2) For line 133, you assumed the norm to be a constant 27, which also seems problematic and not rigorous to me. One better way to justify this is to visualize the histogram of the norms.
(3) for imagenet, cele-A, and places fine-tuning using equation (9), how do you get the text, image pair for the $L_{text}$?
Results section:
(1) Directly comparing the SD-IPC-FT w/ text version with SD-R is apples-to-oranges. SD-R just uses the image as guidance; you should compare the SD-IPC (w/ or w/o FT) version that only uses images as well. You can present both w/ and w/o text results.
(2) No quantitative comparison between SD-IPC-FT and Custom Diffusion.
(3) It seems SD-R without text input is still better in Table 2?
(4) Custom Diffusion accepts multiple concepts, but it seems this method cannot.
(5) In Figure 9, for SD-IPC-FC, what if you initialize the fc layer from the closed-form inverse projection, but make it trainable?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: see above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I guess two obvious limitation are:
(1) may only work for text-to-image models that relies on CLIP encoder
(2) need to get re-trained every time for a new domain.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your suggestion. The common questions are first answered in **General Responses**. Below please find the responses to specific comments.
___
### Q1: The abstract part can be improved.
We appreciate your suggestion. We will rewrite the abstract to highlight the relationships among the proposed methods and with other methods, as summarized in **General Responses Q1**.
___
### Q2: Separate related section, that discuss prior work sufficiently.
We will incorporate such a discussion in the final version, and we have included quantitative experiments in **General Responses Q2**. It's important to note that, as explained in **General Responses Q1**, our method primarily focuses on image variation (producing images similar to the reference image) and customized image generation. As a result, direct comparisons with image editing techniques such as InstructPix2Pix and Plug-and-Play may not be applicable.
___
### Q3: Questions about the text token length.
The number 76 derives from CLIP, whose maximum text length is 77 tokens (token indices 0 to 76); sentences are padded or truncated to length 77, a convention also followed by Stable Diffusion [1]. In the context of Eq. (8), the combination is computed by adding the end-token embedding, weighted by $\alpha$, to the converted image embedding: $f_{txt}^{comb} = f_{txt}^{cnvrt} + \alpha \cdot f_{txt}^{t,\left\langle {eos} \right\rangle }$. After the combination, $f_{txt}^{comb}$ replaces the end-token and all pad-tokens.
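A minimal sketch of this combination and replacement step (the array names, the dimension 768, and the eos position here are illustrative assumptions, not the authors' code):

```python
import numpy as np

def combine_image_and_text(tokens, eos_idx, f_cnvrt, alpha=0.5):
    """Fuse a converted image embedding into a CLIP text token sequence.

    tokens : (77, d) CLIP text token embeddings (indices 0..76)
    eos_idx: position of the end-of-sequence token
    f_cnvrt: (d,) converted image embedding
    """
    # Eq. (8): f_comb = f_cnvrt + alpha * end-token embedding
    f_comb = f_cnvrt + alpha * tokens[eos_idx]
    out = tokens.copy()
    out[eos_idx:] = f_comb  # replace the end-token and all pad-tokens
    return out

seq = np.random.randn(77, 768)  # toy token embeddings
f_img = np.random.randn(768)    # toy converted image embedding
combined = combine_image_and_text(seq, eos_idx=5, f_cnvrt=f_img)
```

Note that everything from the eos position onward is overwritten, which works because in CLIP the pad-tokens always follow the end-token.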
___
### Q4: Figure illustrating the fine-tuning process.
We appreciate your valuable suggestion. The framework of our method has been presented in the supplementary material Section A. In response to your comment, we will improve the figures to provide clearer insights.
___
### Q5: Can this method be generally applied to other text-to-image diffusion models?
Our method is compatible with diffusion models that use CLIP as the text encoder. As Stable Diffusion is currently the dominant text-to-image generation model, our work has sufficient impact on the related research community.
___
### Q6: SD-R may just train once, but this method needs to be trained for each domain.
Our method shows out-of-distribution generalization in our experimental results. Notably, the testing images featured in the paper remain entirely separate from the fine-tuning images. Furthermore, the fine-tuning of SD-IPC-FT on objects, scenes, and portraits serves to **define what to preserve from the reference image**. This is a **distinct advantage of our method**, as image variation is inherently ambiguous about what to preserve from the reference image. In contrast, SD-R lacks this capability, as shown in Paper Figure 4: SD-R struggles to preserve scene information in the "Bar" example (it still focuses on the crowd). In other words, our fine-tuning is **not on individual domains**, but rather **addresses diverse requirements of preservation**.
___
### Q7: Visualize the histograms of the norms.
The histograms depicting the norms of embeddings can be found in Rebuttal Figure 4. Specifically, the norms of a randomly chosen set of 1,000 texts range from 26.5 to 29.
___
### Q8: For imagenet, celeb-A, and places fine-tuning, how do you get the text?
The texts for ImageNet and Places365 are class-name prompts: "This is a photo of [Class Name].". For portraits, we utilize the MM-CelebA-HQ dataset, which provides image captions for each portrait.
___
### Q9: Directly comparing the SD-IPC-FT w/ text version with SD-R is apples-to-oranges.
We have indeed compared both SD-IPC-FT **w/ text** and SD-R **w/ text** in Paper Figures 5 and 6, where the samples labeled "Without Editing" showcase results from images alone, while the other samples use edited text, such as "On the beach" and "Under the starry sky". It is evident from the figures that our SD-IPC-FT surpasses SD-R in editing performance. For a quantitative comparison, refer to **General Responses Q3**. Apart from the "Wearing a hat" instance of the male in Paper Figure 6, SD-R exhibits only minor edits; this may have led the reviewer to mistake those samples for SD-R results without editing.
___
### Q10: No quantitative comparison between SD-IPC-FT with custom diffusion.
Thank you for your suggestion. We have reported the quantitative comparison in **General Responses Q2** and visual results in Rebuttal Figure 1.
___
### Q11: SD-R without text input is still better in Table 2.
In Table 2, we aimed to illustrate that the training-free SD-IPC achieves comparable results to SD w/ Text (Line 216). Notably, an excessively high CLIP score may indicate a lack of variation, while an excessively low score suggests fidelity loss in the generated image. SD w/ Text effectively encapsulates all desired semantics from the reference image; our method achieving a similar score demonstrates its ability to produce substantial variation while maintaining sufficient semantics. Although our method may exhibit lower image quality than SD-R, partly due to the backbone difference (V1.4 vs. V2.1), it still outperforms SD-R in editing performance, as evident in **General Responses Q3**. As shown in Rebuttal Figure 5, our method also works with SD V2.1.
___
### Q12: For SD-IPC-FC, initialize the fc layer from the closed form inverse projection.
Following the benchmark in **General Responses Q3**, we evaluate image variation, where CLIP-T is computed between the generated image and the prompt "This is a photo of [Class Name].". SD-IPC-FC (I) denotes initializing the FC layer with the inverse projection. The initialization can alleviate overfitting, but only with limited effectiveness. Visual results are in Rebuttal Figure 2; quantitative comparisons are as follows:
| Method | DINO | CLIP-I | CLIP-T |
| :--- | :---: | :---: | :---: |
| SD-IPC | 44.60 | 77.44 | 25.47 |
| SD-IPC-FT | **52.03** | **79.59** | **25.90** |
| SD-IPC-FC | 24.71 | 64.96 | 23.76 |
| SD-IPC-FC (I) | 46.55 | 76.54 | 25.78 |
___
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer aD4y:
Thank you for reviewing our paper. Just a friendly reminder that **the author-reviewer discussion will close soon**, and we eagerly await your feedback. In response to your comment, we've updated the quantitative and qualitative results of the Customized Generation Benchmark, which highlights our method's advantage in terms of fewer training iterations. We also answered the questions about details in your comments. Besides, the misunderstanding of the apple-to-orange comparison is clarified. Could you please take a look at these updates?
We're here to discuss any more questions or concerns you may have about our paper.
With warm regards,
Authors
---
Rebuttal Comment 1.2:
Title: Thank you for the clarification
Comment: Dear Authors,
Thank you for the rebuttal and the efforts to clarify my questions. This does address some of my concerns.
I think my concern still centers around the Q5 and Q6.
> Our method is compatible with the diffusion model utilizing CLIP as text encoder. As Stable Diffusion is now the dominating text-to-image generation model, our work has sufficient impact to the related research community.
I think what you said makes sense, and I agree that Stable Diffusion has major impact at this point. But the concern is also valid, as other models have emerged or are emerging, e.g., DeepFloyd IF and Imagen, which use T5 embeddings.
> Notably, the testing images featured in the paper remain entirely separate from the fine-tuned images
Are they from different domains or the same domain?
> In contrast, SD-R lacks this capability, as shown in Paper Figure 4. SD-R struggles to preserve scene information in the "Bar" example (it still focuses on the crowd).
I hold a different opinion about Figure 4. It seems obvious to me that SD-R is better than SD-IPC (yours). Of course SD-IPC is not fine-tuned, but it changed the bear to a baby, and is not as loyal to the reference image as SD-R in terms of pose and angles. In the "Bar" example, I actually felt SD-R is better, as it maintains the vibe while SD-IPC switched it to a family-gathering feeling.
Q: can you release your code and model if the paper is accepted?
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer aD4y,
We appreciate your reply. Regarding your concerns:
1. We agree with your point. Our method is rooted in the relationship between CLIP and Stable Diffusion. The current solution is not directly applicable to T5-style models. However, some parts of our method, e.g., the conversion from CLIP image embeddings to text embeddings, and the importance of a good initialization, might offer inspiration to models or fields beyond Stable Diffusion.
2. The test images belong to the same domains (ImageNet, Places, and CelebA) as the 100 images used for fine-tuning, yet they are from different classes. We also tried generating CelebA images with the ImageNet fine-tuned model and vice versa; the models produce reasonable results. We will report some of these results in the final supplementary material.
3. We want to clarify that Paper Figure 4 DOES NOT intend to argue that SD-IPC outperforms SD-R; rather, it signifies the superior performance of SD-IPC-FT compared to SD-R and SD-IPC. SD-IPC only serves as an initial experiment that motivates the development of our better method. We are aware of its limitations; for example, it can fail to discriminate semantically related concepts, like "kids" and "teddy bear". We mentioned this in Line 219 of the original paper.
4. Yes, we will release the code and demo if the paper is accepted.
With warm regards, Authors | Summary: This paper presents a method called Stable Diffusion Image-to-Prompt Conversion (SD-IPC) that leverages the inherent capabilities of the Contrastive Language-Image Pre-Training (CLIP) model to convert images into text prompts for image generation tasks. The authors start from the analysis that the control of image generation through text is primarily influenced by the embedding of the end-of-sentence (EOS) token, and that masking other word tokens does not significantly affect the quality of image generation. From this observation, they derive a closed-form projection matrix that converts visual embeddings into text embeddings and use it to control the Stable Diffusion image generation process.
Strengths: **Technical soundness** The paper is well written and the explanation of the underlying mechanisms of the CLIP model and its relationship with image generation is made intuitive.
**Simple and clever visual-to-prompt embedding conversion** The paper presents a straightforward yet ingenious approach to convert images into text prompts by inverting the textual projection layer. This conversion allows for image variations and editing, effectively bridging the gap between images and textual prompts. The simplicity and cleverness of this method contribute to its practicality and usability.
**Efficient variation model learning through fine-tuning** The authors propose two methods to enhance the quality and flexibility of image-to-prompt conversion. The first method involves parameter-efficient tuning using a small amount of data, requiring minimal computational resources and time. This approach enables efficient learning of the variation model, making it practical for real-world applications.
**Visually sound generations and variations** The paper showcases visually sound image generations and variations achieved through the proposed SD-IPC method. By leveraging the inherent capabilities of the CLIP model, the generated images align well with the desired concepts and demonstrate high-quality results.
Weaknesses: The method is only compared with SD-R and the inclusion of additional comparisons with existing methods could have further strengthened the paper in particular for the evaluation of the editing capability of the method.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: As the authors said, the approximation of the inversion of the textual projection matrix may be a source of suboptimal results. How does this impact the generation process?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors show some limitations in their work including the need for coherence between the editing prompt and the target image or the lack of multiple target images.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive comments. Some common questions are first addressed in the **General Responses**, followed by answers to individual reviews.
___
### Q1: Additional comparisons with existing methods could have further strengthened the paper in particular for the evaluation of the editing capability of the method.
Thanks for your suggestion. The experiments of customized generation have been detailed in **General Responses Q2**. Both quantitative and qualitative results will be incorporated into our final version.
___
### Q2: As the authors said, the approximation of the inversion of the textual projection matrix may be a source of suboptimal results. How does this impact the generation process?
We express our gratitude to the reviewer for raising this important concern. The discrepancy between the projected visual feature and its textual counterpart indeed introduces confusion between concepts. As evident in Paper Figure 4 and Lines 219$\sim$221, an image of a "teddy bear" was transformed into a picture of a "kid"; similarly, in Supplementary Material Figure 15, a "bear" became a "tiger". This confusion primarily stems from the inherent modality gap between images and text. Our CLIP experiments have revealed that the cosine similarity between ground-truth image-text pairs is only around 0.3, which signifies that the alignment of features is not perfect. This modality gap persists after the inverse matrix projection, leading to the perplexing shifts between closely related concepts.
Motivated by this gap, we propose the SD-IPC-FT approach to reduce it by fine-tuning partial parameters of the SD model.
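As a toy illustration of the closed-form inversion and why it still leaves a gap, consider the following NumPy sketch. The dimensions and the matrix `W_t` are random stand-ins for CLIP's textual projection, not the actual weights: the Moore-Penrose pseudo-inverse gives an exact round trip through the joint space, but nothing constrains the converted vector to lie near the embeddings of real sentences, which is where the modality gap enters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_joint = 1024, 768      # illustrative hidden / joint dimensions

# W_t plays the role of CLIP's textual projection (hidden state -> joint space).
W_t = rng.normal(size=(d_model, d_joint)) / np.sqrt(d_model)
W_inv = np.linalg.pinv(W_t)       # closed-form (Moore-Penrose) inverse projection

img_joint = rng.normal(size=d_joint)   # a joint-space image embedding
converted = img_joint @ W_inv          # pseudo "text embedding" for the image

# Re-projecting recovers the joint-space vector (W_t has full column rank) ...
assert np.allclose(converted @ W_t, img_joint)
# ... but `converted` need not resemble the embedding of any real sentence;
# that residual mismatch is the modality gap discussed above.
```

The round trip being exact shows the suboptimality is not numerical: it comes from the converted vector lying off the manifold of genuine text embeddings.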
___
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed answers.
I am happy with the paper and will keep my rating. | Summary: This paper demonstrates that the CLIP model, used in Stable Diffusion, inherently possesses the ability to convert images into text prompts. They achieve this by utilizing a linear projection matrix calculated in a closed form. Furthermore, the paper shows that this capability can be enhanced by incorporating a small amount of similar-domain training data or performing online training steps on reference images. These approaches offer a simple and flexible solution to bridge the gap between images and text prompts, enabling more effective interaction between the two modalities.
Strengths: 1. This paper is well organized and the motivation is good. The basic idea of Image-to-Prompt Conversion is simple but interesting. I like the idea of using a closed-form projection matrix to convert the visual embeddings into a semantic prompt that can help control the stable diffusion.
2. The insights about start-/end-token's attention map for stable diffusion are impressive.
3. The proposed methods' finetuning time of customized generation is significantly faster than the current methods.
Weaknesses: 1. Although the basic idea of the paper is cool, the visual results of SD-IPC and SD-IPC-FT do not have a sufficient advantage over previous methods.
2. This paper only compares proposed methods with Customized Generation and SD-R. I think the authors should also add experiments to compare the SD-IPC-FT with DreamBooth (with and w/o LoRA), textual inversion and Reference-only module for controlnet.
3. As shown in Table 2, SD-IPC achieves the worst FID, which means it outputs the most unrealistic images. Besides, the CLIP scores are also not good enough. I think the authors should add more explanation of these tables in the SD-IPC part of Section 4.2. Also, more comparisons should be added to these tables.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Since SD-IPC and textual inversion both aim to convert visual embeddings into text prompts, a detailed comparison with it should be included in this part. From my point of view, SD-IPC is just a simplified textual inversion method.
2. For Customized Generation, more comparisons should be provided in this paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments. The common questions are first answered in **General Responses**, then we clarify questions from individual review.
___
### Q1: The visual results of SD-IPC and SD-IPC-FT do not have a sufficient advantage over previous methods.
SD-IPC-FT surpasses prior methods like SD-R [1] in terms of editing performance. SD-R [1] occasionally struggles with effective editing, as shown in Paper Figure 5, 6, and Rebuttal Figure 3. Quantitative results in **General Responses Q3** reinforce this point, with our method achieving an editing fidelity (CLIP-T) of 28.69, compared to SD-R [1] at 26.01. Notably, even the training-free SD-IPC attains a CLIP-T score of 26.84.
___
### Q2: I think the authors should also add experiments to compare the SD-IPC-FT with DreamBooth (with and w/o LoRA), textual inversion and Reference-only module for controlnet.
Thank you for pointing out this issue, we have addressed this matter in **General Responses Q2**, where we present the experiments of customized generation. Our SD-IPC-CT is comprehensively compared with DreamBooth [2], Textual Inversion [3], and Custom Diffusion [4]. The visual results are depicted in Rebuttal Figure 1. We will incorporate these results into our final version and provide more visual results in the supplementary material. We acknowledge your interest regarding a comparison with the recently introduced Reference-only controlnet. However, we must clarify that this falls outside the scope of our paper, as the Reference-only controlnet was released in May 2023, coinciding with the time of our submission.
___
### Q3: As shown in Table 2, SD-IPC achieves worst FID which means they output the most unrealistic methods. Besides, the CLIP-Score are also not good enough.
Thank you for the comment; it appears that the reviewer misunderstood the purpose of Table 2. Table 2 is meant to demonstrate the effectiveness of the training-free image-to-prompt conversion equation, which serves as the initialization of SD-IPC-FT and SD-IPC-CT.
In Table 2, we can see that SD-IPC achieves comparable results to SD with the ground-truth text (Line 216). The CLIP score, which compares generated and reference images, quantifies their "content distance". It is important for the CLIP score to fall within a reasonable range: an excessively high score may indicate a lack of variation (for example, a generated image identical to the to-be-varied image, i.e., no variation effect, yields the highest possible CLIP score), while an excessively low score suggests fidelity loss in the generated image. "SD with ground-truth text prompt" (SD w/ Text) means generating images using the text prompt of the reference image; the generated result therefore effectively encapsulates all desired semantics from the reference image. Our method achieving a similar CLIP score to SD w/ Text demonstrates its ability to generate substantial variation while maintaining sufficient semantics. While our method may exhibit lower image quality than SD-R (partly due to the backbone difference, V1.4 vs. V2.1), it still outperforms SD-R in editing performance, as evident in **General Responses Q3**. As shown in Rebuttal Figure 5, our method also works with Stable Diffusion V2.1; we will update our results in the new version.
___
### Q4: From my point of view, SD-IPC is just a simplified textual inversion method.
As stated in **General Responses Q1**, SD-IPC and SD-IPC-FT are designed for image variation generation, focusing on generating similar semantics to the reference, e.g., a similar scene or object. Textual Inversion [3] aims at customized image generation, preserving the reference object's identity. Notably, our SD-IPC-CT also enables fast customized image generation. Furthermore, **General Responses Q2** expands comparisons beyond the results shown in Figure 7 of the current paper.
___
### Q5: For Customized Generation, more comparisons should be provided in this paper.
We appreciate your suggestion. The experiments related to customized generation have been detailed in **General Responses Q2**. Both quantitative and qualitative results will be incorporated into our final version.
___
[1] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[2] Ruiz, Nataniel, et al. "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] Gal, Rinon, et al. "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion." The Eleventh International Conference on Learning Representations. 2022.
[4] Kumari, Nupur, et al. "Multi-concept customization of text-to-image diffusion." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer rWok:
Thank you for reviewing our paper. Just a friendly reminder that **the author-reviewer discussion will close soon**, and we eagerly await your feedback. In response to your comment, we've updated the quantitative and qualitative results of the Customized Generation Benchmark, which highlights our method's advantage in terms of fewer training iterations. Besides, we also explained how it's different from the Textual Inversion. Could you please take a look at these updates?
We're here to discuss any more questions or concerns you may have about our paper.
With warm regards,
Authors | Rebuttal 1:
Rebuttal: # General Responses
We thank all reviewers for your thoughtful and detailed feedback, which is of great importance to our work. Here we address some common issues and share some findings that all reviewers might be interested in.
___
### Q1: Clarification of certain facets of the proposed method.
This paper presents a novel finding that allows for the direct transformation of an image into a textual prompt embedding in the CLIP model [1]. Leveraging this finding, we formulate a range of methods with distinct advantages and applications. In the subsequent sections, we highlight certain facets of our methods:
| Method | Application | Comparisons | Merits | Relationship |
| :--- | :--- | :--- | :--- | :--- |
| SD-IPC | Image Variation | SD-R [2] | Training-free, Demonstrates the finding | Serves as the initialization of SD-IPC-FT and SD-IPC-CT |
| SD-IPC-FT | Image Variation | SD-R [2] | Lightweight training, Able to customize what to preserve from reference images | Tunes SD-IPC offline, Better editing effect than SD-R [2] |
| SD-IPC-CT | Customized Generation | DreamBooth [3], Textual Inversion [4], Custom Diffusion [5] | Fast online training, Better generation quality with few training iterations | Tunes SD-IPC online |
___
### Q2: Results of customized generation benchmark and the comparison with common methods.
Following the benchmark proposed in DreamBooth [3], we employ a dataset comprising 30 subjects from 15 different classes for fine-tuning. Each subject is represented by 4$\sim$6 images and is edited using 25 texts, 4 times each, yielding 3,000 images in total. DINO and CLIP-I scores are computed for subject fidelity (preservation of subject identity details); the CLIP-T score measures prompt fidelity (editing performance). We compare our SD-IPC-CT with three common customized generation methods: DreamBooth [3], Textual Inversion [4], and Custom Diffusion [5]. The visual results are depicted in Figure 1 of the Rebuttal PDF; the quantitative results are as follows:
| Method | DINO | CLIP-I | CLIP-T | Comments |
| :--- | :---: | :---: | :---: | :--- |
| DreamBooth [3] | **60.11** | **77.78** | 25.81 | **Good Identity**, Weak Editing |
| Textual Inversion [4] | 25.11 | 62.44 | 29.53 | Weak Identity, **Good Editing** |
| Custom Diffusion [5] | 39.67 | 68.37 | **30.90** | Weak Identity, **Good Editing** |
| SD-IPC-CT (Ours) | 50.25 | 74.59 | 28.14 | **Good Identity**, **Good Editing** |
The result demonstrates that DreamBooth [3] obtains the highest DINO/CLIP-I scores. However, its CLIP-T score is significantly lower, indicating unsatisfactory editing performance. This is also apparent in the visual results, where DreamBooth [3] generated images closely resemble the training images with minor edits. In contrast, Textual Inversion [4] and Custom Diffusion [5] exhibit strong CLIP-T scores, albeit much lower DINO/CLIP-I scores, highlighting their weakness in preserving subject details. In comparison, our SD-IPC-CT method strikes a good balance between subject identity preservation and editing performance.
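For clarity on how the fidelity numbers above are computed, here is a minimal sketch of the metric family, assuming features have already been extracted; this is an illustration of the cosine-similarity idea, not the benchmark's exact code.

```python
import numpy as np

def fidelity_score(gen_feats, ref_feats):
    """Mean pairwise cosine similarity between generated-sample features and
    reference features. DINO / CLIP-I style scores use image features on both
    sides; a CLIP-T style score compares generated-image features against the
    editing prompt's text feature.

    gen_feats: (n, d) features of generated images
    ref_feats: (m, d) features of the references
    """
    g = gen_feats / np.linalg.norm(gen_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    return float((g @ r.T).mean())
```

The similarities lie in [-1, 1]; the tables above report them scaled by 100.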
___
### Q3: The advantage of the proposed method over SD-R [2].
In the current version of the paper, we highlight SD-IPC-FT's advantage over SD-R [2] in Paper Figures 5 and 6, demonstrating the superior editing performance of SD-IPC-FT. Here we add a quantitative experiment to validate this claim. Specifically, we fine-tune SD-IPC-FT using 100 ImageNet [6] images, ensuring it retains object-level details from a reference image. For testing, we utilize a total of 158 images from the DreamBooth benchmark, featuring 30 subjects. To conduct the text-edited image variation task, we randomly select an editing text from the DreamBooth benchmark for each test image. Editing performance is evaluated using the CLIP-T score. Importantly, DINO and CLIP-I scores are omitted from evaluation since they indicate similarity, which does not reflect the quality of "variation". For example, a generated image identical to the to-be-varied image (no variation effect) would attain the highest DINO and CLIP-I scores. The visual results are shown in Rebuttal Figure 3 (SD-IPC-FT (C) fine-tunes only the CLIP part and SD-IPC-FT (U) only the U-Net part; these two serve as an ablation study). The quantitative results are as follows:
| Method | CLIP-T |
| :--- | :---: |
| SD-IPC | 26.84 |
| SD-IPC-FT | **28.69** |
| SD-R [2] | 26.01 |
As seen, our method achieves a higher CLIP-T score than SD-R. Furthermore, we include the training-free SD-IPC for comparison, revealing that even SD-IPC slightly outperforms SD-R in CLIP-T score.
___
[1] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." International conference on machine learning. PMLR, 2021.
[2] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[3] Ruiz, Nataniel, et al. "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[4] Gal, Rinon, et al. "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion." The Eleventh International Conference on Learning Representations. 2022.
[5] Kumari, Nupur, et al. "Multi-concept customization of text-to-image diffusion." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[6] Deng, Jia, et al. "Imagenet: A large-scale hierarchical image database." 2009 IEEE conference on computer vision and pattern recognition. Ieee, 2009.
Pdf: /pdf/243a926a98e5a0d1223085d073fe9d44d052e36a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A New Linear Scaling Rule for Differentially Private Hyperparameter Optimization | Reject | Summary: This paper presents a novel hyperparameter tuning method in the presence of a privacy budget: linearly extrapolating from observations with very low privacy loss.
Strengths: The core technique presented here is certainly interesting and deserving of future study. The paper tackles an issue which is often unaddressed in the literature on training DP models: that of choosing hyperparameters subject to a privacy budget. This problem itself is also deserving of further study.
Weaknesses: * A primarily empirical paper will live and die with the strength of its baselines (as well as its upper bounds in a case like this one where upper bounds on the efficacy of the technique can be computed). The baselines here are insufficiently strong, and do not seem to reflect the statements in the cited papers. The core technique _could_ be a component of a strong paper, but this paper is not it.
* Some baseline issues: the citation problems with [51], [52] (detailed below). Lack of comparison to the 'naive baseline' of directly applying gaussian mechanism to results of grid search, say given known training statistics / optimal hparam values for nonprivate datasets (to avoid infinite regress, and here not so much of a problem since the experiments are all focused on public feature extractor settings). Lack of clear comparison to the 'upper bound' of _forgetting_ about the privacy cost of hparam search, which _should_ be an upper bound in _all_ scenarios considered here (IE, performing a sufficiently large grid search directly targeted at the problem at hand).
* On [51]/[52], I see the reported CIFAR-10 numbers from [51] as 98.8\% at $\epsilon=1$ and 98.9\% at $\epsilon=\infty$ (table 1 of [the arxiv version](https://arxiv.org/pdf/2211.13403.pdf)). Is there a typo in figure 2 of the paper under submission? Similarly, [51] seems to claim 88.1\% and 90.6\% at $\epsilon=1$ and $\epsilon=\infty$ for CIFAR-100. I uncovered these discrepancies since the paper under submission seemed to present implausibly strong results to me--e.g. it should be _impossible_ to achieve at $\epsilon=1$ what none of the cited papers achieved at $\epsilon=\infty$ just by tuning hyperparameters (see figure 2).
* The statements of timing on Imagenet seem wrong? The cited paper [51] seems to be pointing to a version from Nov 2022, clicking through to [52] seems to show a version uploaded in May 2022--so where are Jan 2023 and 'within the last month' coming from?
* Some more consideration is required in the decomposition of $r$--do we know that random decomposition 'is enough'? Presumably it's not, since we _can_ generate an $\eta$ for which the problem will presumably diverge?
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: The major questions I have for the authors here are the sources for both the claims on timing in relation to ImageNet (see weaknesses above) and the issues in citations (particularly with [51] and [52]).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 3 good
Contribution: 3 good
Limitations: Societal impact not immediately applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The core technique presented here is certainly interesting and deserving of future study.
We appreciate the reviewer's recognition that we have chosen to study an important problem. The majority of our rebuttal will engage with the main stated weakness, that we do not compare with the right numbers in the cited works or consider the right baselines.
> A primarily empirical paper will live and die with the strength of its baselines (as well as its upper bounds in a case like this one where upper bounds on the efficacy of the technique can be computed). The baselines here are insufficiently strong, and do not seem to reflect the statements in the cited papers.
We now explain that we already compare to the baselines recommended by the reviewer, and compare to the right numbers in the cited works. As a reminder, we provide Thm 2.3 that holds for the linear probing we consider for CV tasks.
> Lack of comparison to the 'naive baseline' ... Lack of clear comparison to the 'upper bound' of forgetting about the privacy cost of hparam search ...
The 'upper bound' grid search that does not consider the privacy cost of hparam search is the exact 'upper bound' considered in Figure 3; we compare the performance of the linear scaling rule to the 'upper bound' grid search. That is, for the x-axis point labeled 1.0 (eps=1.0), we evaluate 100 hyperparameter combinations for grid search with eps=1.0 (see Fig 19/20) and do not consider the privacy cost of any of them but the optimal hyperparameter combination. As noted in line 184, we do not compare to the 'naive baseline' because it will distort the x-axis too much, and we feel the linear scaling rule is sufficiently competitive with the 'upper bound'. If we were to chart the 'naive baseline' then the x-axis would stretch to eps=10 since the privacy cost would grow with $\sqrt{trials}$.
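The $\sqrt{trials}$ growth invoked above follows from Gaussian-DP composition, under which $k$ runs at parameter $\mu$ each compose to $\sqrt{k}\,\mu$. A minimal sketch of that arithmetic (the per-run budget and trial count here are illustrative, not from the paper):

```python
import math

def composed_mu(mu_per_run: float, trials: int) -> float:
    """Gaussian DP composes in quadrature: k runs at mu each yield sqrt(k) * mu."""
    return math.sqrt(trials) * mu_per_run

# A single tuning run vs. a naive 100-trial random search:
# the naive baseline pays a 10x larger GDP parameter.
single = composed_mu(1.0, 1)
naive = composed_mu(1.0, 100)
assert naive / single == 10.0
```

This is why charting the naive baseline would stretch the x-axis far past the budgets compared in Figure 3.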
> On [51]/[52], I see the reported CIFAR10 numbers from [51] as 98.8% at eps=1
and 98.9 at eps=∞ (table 1 of the arxiv version).
The referenced numbers are from models pretrained on JFT, a massive proprietary Google dataset that we cannot access (note the rightmost column of Table 1 in [51]). In the caption of Figure 2 we specify that we are considering models pretrained on ImageNet-21k, and therefore use the numbers from the middle row in Table 3/Table 4.
> Is there a typo in figure 2 of the paper under submission?
We don't believe there is a typo here.
> Similarly, [51] seems to claim 88.1% and 90.6% at the eps=1,∞ for CIFAR-100.
Similarly, these numbers are for pretraining on JFT, which is an unfair comparison since JFT (3 billion images) is far larger than ImageNet-21k.
> I uncovered these discrepancies since the paper under submission seemed to present implausibly strong results to me...
We provide all the code necessary to reproduce our results in the first line of the Appendix (also made available to the AC) and we urge the reviewer to choose any of the models in Figure 5 (see utils.py), extract the features, and do linear probing with the given hyperparameters; it will only take a few minutes. The improvements over the results reported in cited papers stem from using better models and more optimal hyperparameters. For example, if we use the same beit model as [7], we can in fact improve over their best results, as shown in Figure 5. It's entirely possible that the papers did not consider the best hyperparameters, because the search space is vast (learning rate * schedule * epochs * batch size * momentum * optimizer * parameters to optimize * initialization).
> The statements of timing on Imagenet seem wrong? ...
We are quoting the timing and numbers from the published versions not the Arxiv versions, but we cited the Arxiv version in the bibtex; thank you for bringing this error in the bibtex to our attention, we will make sure to fix it. The published versions are public on OpenReview; [51] https://openreview.net/forum?id=Uu8WwCFpQv [52] https://openreview.net/forum?id=Cj6pLclmwT.
We provided these times in the caption of Figure 4 in order to provide a rough timeline on 'SOTA' for any future readers, because we assume that 'SOTA' will advance quickly. We never claim in the main paper that the gap between our method and [51] is due to the recency of the work, but rather because they pretrain on a much better dataset than anything we have access to. Furthermore, we provide results on full fine-tuning language tasks in addition to ImageNet, whereas their proposed method (DP-FC, [51]) explicitly only works for linear probing on vision tasks.
> Some more consideration required in decomposition of r...
Based on the heatmaps in the Appendix, specifically Figure 19, we can see that the decomposition does not hold for r=eta (single-step), but this is not a critical issue because [52] uses single-step exclusively, it is just suboptimal. It's true that if r=1000 we definitely don't want to use eta=1000, but as we note in line 107 we are not really interested in the behavior with very large values of epsilon. The exact routine we used in decomposing r is a simple nested for loop, that iterates over randomly shuffled arrays of valid epoch and learning rate values and checks whether their product is within some tolerance of the given r.
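The "simple nested for loop" routine described above can be sketched as follows; the grid values and tolerance are illustrative placeholders, not the paper's actual search space:

```python
import random

def decompose_r(r, epochs_grid, lr_grid, tol=0.05, seed=0):
    """Decompose a total step size r into (epochs, learning rate).

    Iterates over randomly shuffled copies of the valid epoch and
    learning-rate values and returns the first pair whose product is
    within a relative tolerance of the requested r, or None if no
    pair qualifies.
    """
    rng = random.Random(seed)
    epochs = list(epochs_grid)
    lrs = list(lr_grid)
    rng.shuffle(epochs)
    rng.shuffle(lrs)
    for T in epochs:
        for eta in lrs:
            if abs(T * eta - r) <= tol * r:
                return T, eta
    return None

# Example: r = 1000 can be decomposed as, e.g., T=50, eta=20 or T=100, eta=10.
pair = decompose_r(1000.0, epochs_grid=[10, 20, 50, 100],
                   lr_grid=[0.5, 1, 2, 5, 10, 20])
assert pair is not None
```

Shuffling avoids always returning the same corner of the grid, consistent with the randomized decomposition the rebuttal describes.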
> The major questions I have for the authors...
We hope that the clarification on the publication date of [51, 52] at TMLR rather than the Arxiv date can address the first question (we only put these times in the caption of Figure 4 in order to provide a rough timeline on 'SOTA' for any future readers, because we assume that 'SOTA' will advance quickly). For the second question, we hope that the clarification that we are comparing our ImageNet-21k results to the ImageNet-21k results in [51, 52] explains the numbers in Figure 2. We believe that comparing our ImageNet-21k results to their JFT results would be unfair because we do not have the ability to evaluate on JFT, and nor do any researchers outside of Google.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' high-spirited response. However, I believe the issues with figure 2 remain too strong to alter my score.
Fundamentally, the reason that I am unwilling to move is the fact that $\epsilon=\infty$ should _always be able to beat, or at least match, any DP hyperparameter tuning method_. Therefore, given that the authors seem to have reproduced these results from scratch rather than quoting them, many questions remain on the methods of generating the numbers in figure 2.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's diligent efforts in providing insightful comments that are helpful in enhancing the quality of our paper. In the forthcoming revisions, we will craft a more persuasive discussion to assure readers that our results are valid and not 'too good to be true'. To summarize, the improvements are largely attributed to enhancements in the pretrained models, which can achieve impressive performance even in zero-shot scenarios.
All numbers in Figure 2 are quoted from the papers where they are drawn. For [51], the numbers are drawn from their Table 3, second multirow (ImageNet-21k), column 5 ($\varepsilon=1.0$), row 5 (their best result with their DP-FC method) which is 96.3, and for the $\varepsilon=\infty$ it is from the column titled 'non-private' in that same multirow (96.6).
We obtain better results even compared to the $\varepsilon=\infty$ numbers in other papers via a combination of better models and better hyperparameters. We compare a range of models in Figure 5. Some of these perform better than others, and prior work also uses different models for linear probing. For example, [7] uses beit, [51] uses vit-L, and [15] use WRN-40-4. As we note in line 252, the best model for DP fine-tuning is also the model with the highest non-private accuracy as reported by [78].
Furthermore, we have provided the code necessary to reproduce our results in a matter of minutes (shared with the AC); even running the code for 10 seconds should be sufficient to verify our claims.
---
Reply to Comment 1.1.2:
Comment: As the discussion period is drawing to a close, we would like to thank you for your efforts in reviewing the paper. As shown by the extensive discussion with the other reviewers, which resulted in an overall score increase of 3 points across the other reviews, we were able to clarify that the improvements shown by our method over prior work are largely due to the use of better pretrained models. When we compare to a work that uses the same pretrained model that we do, such as [7], the improvement is due to the better hyperparameters found by our method and the tricks we propose in Appendix A.2. There are other (unpublished) papers that also improve over the same pretrained model as [7], as we discussed with reviewer kJWb, and we will add more entries to our Table 2 to compare with those.
In your initial review you mentioned a lack of comparison to other baselines. We hope our initial response clarified that we did compare to the upper bound (non private grid search) in the main paper. We also want to draw your attention to the discussion on the main AC comment thread, the response to reviewer kJWb, and the response to reviewer BCCV, where we do fair comparisons to three other baselines: random search, the method in [3], and the recent/concurrent method in [4]. We improve upon all these by multiple percentage points. For example, the error rate reduction between random search and our method as compared to the oracle is about 70% (exact numbers are in that other comment). We will include all these additional baseline comparisons in the camera ready.
We will incorporate the extensive discussion with other reviewers into the camera ready to ensure that our improvements do not come across as 'too good to be true'. We would be grateful if you can leave a final response based on these discussions. | Summary: The paper proposes a linear scaling rule for finding the optimal value of the learning rate and number of training steps for differentially private SGD (DP-SGD). The idea is simple, small amount of privacy budgets are allocated for two initial DP learning rate optimization procedures, and then the values are extrapolated to bigger epsilon-values using linear scaling (as a function of epsilon). The work is mostly experimental, and the experimental results e.g. with CIFAR-10 show that for epsilon between 0 and 1, the scaling rule seems to nicely fit the optimal values found by the grid search.
Strengths: - The idea seems very interesting and novel.
- The paper is mostly written well and is easily readable.
Weaknesses: - The technique is restricted to optimising the learning rate and the length of the training. I wonder if similar extrapolation (perhaps, more generally, polynomial extrapolation) could be used to find the optimal values of other hyperparameters.
- The technical part could be written more carefully. It remains unclear whether you use RDP or GDP. The hyperparameter tuning cost of the method by Papernot and Steinke is in terms of RDP, but you list theoretical results in terms of GDP. In the end of Alg. 1 you write that the total cost is "$\varepsilon_f + \varepsilon_0 + \varepsilon_1$". Is that approximate DP? In case you use the classical composition result where you just add up the privacy parameters, what happens to the $\delta$-parameters?
- Some conclusions are vaguely formulated/confusing. On p. 7 you have the subtitle "Linear Scaling is robust to distribution shifts", but then you seem to show, and also claim in the subsequent text, that DP itself is robust to distribution shifts. The message is vague here.
- The contribution remains too thin in my opinion. There is really no theoretical or even heuristic explanation for the proposed scaling rule. There are two theoretical results given: a GDP composition result (which is well known and should be cited as such) and another result whose importance I find difficult to judge.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors:
What does the following line in Alg. 1 mean: "Use privacy loss variable accounting to calibrate noise parameter $\sigma$ given $\varepsilon$" ?
- I somehow find it hard to believe that you would get 99 percent test accuracy for CIFAR-10 with $\varepsilon=1.0$. The SOTA results by De et al. ("Unlocking High-Accuracy Differentially Private Image Classification through Scale") are somewhere at 95 percent for $\varepsilon=1.0$. What enabled you to get such good results?
- You mention in the description of the method that you scale up to $\varepsilon=1.0$. Why is that? You seem to use varying $\varepsilon$-values in the experiments.
There seem to be typos here and there, please go through the writing carefully. Here few examples:
line 387-388: " treat < 10% of the private training dataset and public" -> "as public"
line 394: "We provide find that our method attains..."
I don't quite understand the following sentence:
"The key assumption in DP fine-tuning is that there is no privacy leakage between public data and private data."
You mean that there are not too big distribution shifts?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Some of the limitations are discussed in Section 5 but it could be expanded I think.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer. We address concerns about scope and clarity of contributions, and explain how we improve over the prior SOTA.
>The technique
There are a number of hyperparameters for DP: clipping norm, batch size, momentum, optimizer, which parameters to update, and how to initialize them. We design our linear scaling rule around analytically optimal choices for these hyperparameters. These choices are mentioned in lines 94 and 95 in the main body, and detailed ablations are in Appendix A.2 (Table 1). We do not need to estimate the clipping norm because we know that a unit clipping norm is optimal, and the same holds for using full-batch updates, etc. The remaining hyperparameters are just the learning rate and the number of iterations, and our method provides guidance on optimizing these.
>The technical
We appreciate the reviewer's careful reading of our theory in Section 2. We have incorporated these comments into the following overhaul. First we will rewrite Lines 125-134 and update the corresponding lines in Alg. 1 with the following procedure. Given a desired final $(\epsilon, \delta)$-guarantee, we will use the GDP-approx DP conversion from Corollary 2.13 in [18] to find the appropriate value of the parameter $\mu$ for GDP. Then we will allocate $\mu$ across the hyperparameter optimization runs and final run according to $\mu = \sqrt{3 \mu_{1}^{2} + 3 \mu_{2}^{2} + \mu_{f}^{2}}$, where $\mu_{f}$ is the privacy parameter for the final run that uses the hyperparameters adaptively chosen by using the linear scaling rule on the outputs of the runs with smaller privacy budgets $\mu_{1}, \mu_{2}$. Then for a given $\mu_{i}$, we will decompose this into $(T, \sigma)$ according to Proposition 2.1 in the main paper.
By writing the privacy analysis in this manner we can stay in GDP the entire time, and we hope these presentation changes will address the reviewer's feedback.
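The allocation above can be made concrete. Below is a minimal sketch, assuming the standard $\mu$-GDP to $(\epsilon, \delta)$ conversion of Dong et al. (the Corollary 2.13 referenced in the rebuttal) and illustrative budget values $\mu_1, \mu_2$ that are not taken from the paper:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def delta_from_mu(mu, eps):
    """(eps, delta) tradeoff curve of mu-GDP (Dong et al.)."""
    return Phi(-eps / mu + mu / 2.0) - math.exp(eps) * Phi(-eps / mu - mu / 2.0)

def mu_for_eps_delta(eps, delta, lo=1e-4, hi=20.0):
    """Bisect for the mu whose curve matches the target (eps, delta)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if delta_from_mu(mid, eps) < delta:
            lo = mid  # mid is too private; increase mu
        else:
            hi = mid
    return (lo + hi) / 2.0

def final_run_budget(mu_total, mu1, mu2):
    """Solve mu_total^2 = 3*mu1^2 + 3*mu2^2 + muf^2 for muf."""
    rem = mu_total**2 - 3.0 * mu1**2 - 3.0 * mu2**2
    if rem <= 0:
        raise ValueError("tuning runs exhaust the budget")
    return math.sqrt(rem)

# Illustrative: overall (eps=1, delta=1e-5) guarantee, small tuning budgets.
mu = mu_for_eps_delta(eps=1.0, delta=1e-5)
muf = final_run_budget(mu, mu1=0.01, mu2=0.05)
assert muf < mu  # tuning consumes a (small) share of the budget
```

Staying in $\mu$-space until the very end, as this sketch does, is what lets the analysis avoid mixing GDP and approximate-DP accounting mid-composition.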
We are only using GDP for our method, and we do not use the RDP method from Papernot and Steinke. Their method requires doing a random search and cannot incorporate priors (i.e., it is not adaptive like ours). The point of our work is to develop a principled heuristic that works in practice (across 20 datasets, CV and NLP, linear models and full fine-tuning of transformers) such that we don't need to pay much privacy cost for tuning. Fixing a random number of runs as they do will not provide a good approximation of the optimal hyperparameters unless we pay a large privacy cost.
>The contribution
We summarize the contribution of our work in terms of experiments, theory and heuristic analysis.
As an empirical work, we provide new open-source SOTA baselines for DP research across 20 tasks that are efficient to reproduce and easy to build on top of by just downloading the pretrained models and extracting features. Our hyperparameter optimization method removes the computational burden of optimizing hyperparameters anew for DP training. We believe our work has the potential to help the community build research on the cutting edge by providing efficient and reproducible baselines.
We provide Thm 2.3 (that holds for the linear probing we do for CV tasks) as a theoretical explanation of our scaling rule and validate it empirically (Fig3/Fig4). Thm 2.3 states that when the noise is not too large and the learning rate is smaller than some data-dependent quantity, DP and non-DP linear probing converge to the same solution. The noise increases with the number of iterations T, so we scale up T and the learning rate with $\varepsilon$ while adaptively estimating them to get a good data-dependent approximation of the optimal learning rate.
We provide a heuristic explanation in line 108. Here is another: In order to satisfy indistinguishability over a larger set of final models, we must increase $\varepsilon$. The size of this set increases with the product of the learning rate and number of iterations. So we should scale these with $\varepsilon$.
As mentioned previously we will rewrite the composition analysis to properly cite the GDP composition result (the current citation is in Appendix A.5).
>I somehow
We provided everything needed to reproduce our experiments (including open source code). We encourage the reviewer to run our experiments. For CIFAR10 $\varepsilon=1.0$ 99% test accuracy comes from the combination of a better pretrained model than De et al. (see the models we evaluate in Figure 5) and better choices of the optimal hyperparameters. Other papers (Table 2) have also improved on De et al. One of these papers, Bu et al. ([7] in the main paper) uses the same pretrained model that we do (beit) but we obtain better results because we use better hyperparameters (see Table 1 in the Appendix, A.2).
>I don't
If we assume the existence of some publicly available data for pretraining and then do DP fine-tuning on the private data, it is crucial that there is no privacy leakage between the public data and the private data. There is zero distribution shift only when public = private, which violates this key assumption (avoiding leakage requires that public and private data be sufficiently different). Conversely, if the public data is different enough from the private data to be used for pretraining without privacy leakage, there must be some distribution shift. This motivates our analysis on page 7 of the robustness of DP models trained with the linear scaling rule to distribution shifts.
>You mention
We write "eps << 1 and scaling these up to eps = 1" just to concretely fix eps, rather than writing something like "eps << eps* and scaling these up to eps = eps*" which might be confusing.
>Some conclusions
The distinction is that models trained with DP typically have less accuracy and may be vacuously more robust (there is no gap between ID and OOD when both have 0% accuracy), but we find that we can get robustness without sacrificing much accuracy. We will clarify this.
>Typos
We will fix these typos.
---
Rebuttal Comment 1.1:
Comment: Thank you for the replies! That GDP accounting formula looks correct and should be used in the revised paper. I have continued discussion in the other thread. | Summary: This study proposes a new algorithm for privately selecting hyperparameters subject to maximizing the model utility. The new algorithm draws inspiration from the linear scaling rule that suggests increasing learning rate as batch size increases. Given the number of hyperparameters in DP-SGD the proposed algorithm simply scales learning rate and number of iterations as the privacy budget increases. This introduces a new hyperparameter that is selected privately with a portion of the privacy budget while the rest is used to perform the normal hyperparameter search. The study provides brief theoretical intuition for why we can expect this linear scaling rule to more efficiently determine optimal hyperparamters compared to previous methods and extensive empirical evidence on 20 different benchmark datasets.
Strengths: - One of the first papers to demonstrate improved privacy-utility tradeoffs that takes hyperparameter tuning into account. This is substantial as the field has mainly focused on evaluating the privacy-uility tradeoff without considering the privacy cost of hyperparameter tuning. As we move towards more practical implementations, this will be necessary.
- Clever use of the linear scaling rule to perform hyperparameter search and the resulting algorithm is simple to use.
- Extensive empirical evaluation and insightful analysis. For example, very few analyses have been done on the intersection of DP and distributional shift. Yet the proposed linear scaling rule holds in the presence of distribution shift.
Weaknesses: - “We are 165 the first to show that DP-SGD is capable of learning to handle distribution shifts without using any 166 techniques from the distributionally robust optimization (DRO) literature” -> There are a couple of other papers that draw this connection. [1,2]
- Lack of comparison to other private hyperparameter selection algorithms or hyperparameter free private learning algorithms [3, 4]
- Unclear why the initial hyperparameter search can be done with such a small privacy budget even though this is a key factor driving the performance of the algorithm.
[1] Kulynych, Bogdan, et al. "What you see is what you get: Distributional generalization for algorithm design in deep learning." arXiv preprint arXiv:2204.03230 (2022): 13.
[2] Hulkund, Neha, et al. "Limits of Algorithmic Stability for Distributional Generalization." (2022).
[3] Mohapatra, Shubhankar, et al. "The role of adaptive optimizers for honest private hyperparameter selection." Proceedings of the aaai conference on artificial intelligence. Vol. 36. No. 7. 2022
[4] Koskela, Antti, and Tejas Kulkarni. "Practical differentially private hyperparameter tuning with subsampling." arXiv preprint arXiv:2301.11989 (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the intuition for why the initial hyperparameter search can be done with such a small privacy budget? Is it possible to simply randomly initialize $r_0$ and achieve similar performance?
2. How does this method compare with optimization algorithms that reduce the need for hyperparameter tuning?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The paper does address the technical limitations of the paper (specifically the assumption of access to public and private data). The main improvement for the limitations is to address the comparison to other tuning algorithms or optimization algorithms that don’t require as much tuning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The main improvement for the limitations is to address the comparison to other tuning algorithms or optimization algorithms that don’t require as much tuning.
We appreciate the reviewer's feedback and care in bringing these papers to our attention. In this rebuttal we provide a detailed comparison to the papers you cited on DRO [1,2] and find that our method outperforms the other tuning alternatives in [3, 4].
> We are the first to show...
We appreciate you bringing these papers to our attention. We will update the contribution to read "We show that DP-SGD provides robustness to covariate, subpopulation and label distribution shifts for synthetic and natural datasets." We will also provide the following comparison of the two papers you cited.
[1] proposes DP-IS-SGD that improves the robustness of DP-SGD by removing per-sample gradient clipping (therefore removing the introduced bias but also losing the privacy guarantee; see 4.2 in [1]) and uses knowledge of the groups to sample subpopulations at different rates to improve robustness. Because our method uses DP-GD to maximize the signal-to-noise ratio of updates (Appendix A.2) and requires clipping (because our primary goal is the privacy guarantee, unlike [1] which focuses on DRO) and we do not assume knowledge of groups, we cannot make use of DP-IS-SGD.
[2] seems to be recent work (Google shows only a rejected ICLR 2023 submission), and they conclude that "[DP-SGD] is not a good candidate for improving robustness under covariate or subpopulation shift, as it comes at a major cost to accuracy" (page 7). This conclusion runs counter to our findings, and we believe the reason is that the numerical findings from [2] are not conclusive: the error bars are very large and the results somewhat conflict with each other. Our interpretation of their results is that because their DP-SGD degrades accuracy, it should also increase robustness; however, we find that even when DP-SGD does not degrade accuracy it still improves robustness (Figure 6 in our main paper).
> Lack of comparison to other private hyperparameter selection algorithms...
We appreciate the reviewer drawing our attention to these algorithms. We now provide a comparison. At a high level we evaluate both DPAdamWOSM [3] and the RDP tuning [4] and find that our linear scaling rule outperforms these methods, and analyze why.
We implement DPAdamWOSM [3] and tune the necessary hyperparameter T (# of epochs) between 1 and 200 and report the performance for the best value of T for a fixed $\varepsilon=1$ without accounting for the privacy cost of this tuning. On ImageNet DPAdamWOSM achieves 79% at $T=50$, which is 4% lower than our method (83% at $\eta=20, T=50$). At a high level, our linear scaling rule attempts to do a data-dependent learning rate selection, while DPAdamWOSM does a data-independent learning rate selection. It is natural that for hard tasks (ImageNet) the data-independent choice may not work well. We note that while DPAdamWOSM does not require tuning the learning rate, we still need to tune the number of epochs. Therefore, even if further tuning $T$ for DPAdamWOSM could match the utility of the linear scaling rule, it would not match the privacy guarantee. Ultimately we think these works are compatible, because we can use our hyperparameter tuning procedure to tune the number of epochs in DPAdamWOSM.
We also provide a quantitative comparison to the RDP tuning method in [4] and find that under their experimental setting, our method is 3.5% better on CIFAR10 for the same privacy cost $\varepsilon=1$. In this experimental setting they do linear probing on a ResNet20 checkpoint pretrained on CIFAR100 and achieve 67% on CIFAR10 at $\varepsilon=1$ (Figure 3, Figure 4 in [4]). We use the linear scaling rule and obtain 70.5% accuracy with $\eta=1, T=50, \varepsilon=0.9$. We have added the code to reproduce this experiment to the codebase linked in the first line of the Appendix (also provided to the AC in another comment as per the rebuttal guidelines). The reason our method outperforms [4] is that we do select $\eta, T$ adaptively and scale them with $\varepsilon$ whereas they only do a random search. [4] is specific to RDP. Our method uses GDP because PLD accounting is known to improve over RDP, but we could just as easily use RDP.
> Unclear why the initial hyperparameter search can be done with such a small privacy budget...
The initial hyperparameter search is itself a random search. However, we are just looking for one combination of hyperparameters that produces nontrivial performance. The intuition is that even if the results for 3 hyperparameter combinations are [2%, 5%, 10%] which are all very bad, the relative ordering between this still hints that the hyperparameter combination that produced 10% still lies on the optimal set of hyperparameters. If we randomly initialize r_0, we may have good performance, or (more likely) we may end up with a bad set of hyperparameters. However, the additional cost of trying a few more candidates for r_0 is negligible since the privacy cost of these runs is so low, approximately $\varepsilon=0.01$.
> How does this method compare with optimization algorithms that reduce the need for hyperparameter tuning?
Could you provide an example so that we can make a comparison?
Bib (from reviewer)
[1] Kulynych, Bogdan, et al. "What you see is what you get: Distributional generalization for algorithm design in deep learning." arXiv preprint arXiv:2204.03230 (2022): 13. [2] Hulkund, Neha, et al. "Limits of Algorithmic Stability for Distributional Generalization." (2022). [3] Mohapatra, Shubhankar, et al. "The role of adaptive optimizers for honest private hyperparameter selection." Proceedings of the aaai conference on artificial intelligence. Vol. 36. No. 7. 2022 [4] Koskela, Antti, and Tejas Kulkarni. "Practical differentially private hyperparameter tuning with subsampling." arXiv preprint arXiv:2301.11989 (2023).
---
Rebuttal Comment 1.1:
Title: Followup to Weakness #3 / Question #1 and more details on requested comparisons to other hyperparameter tuning methods
Comment: Here we provide further details for the 3rd weakness and 1st question as asked by the reviewer. We also provide some more insight on the comparisons we did with [3, 4], omitted from the previous comment due to space constraints.
> Unclear why the initial hyperparameter search can be done with such a small privacy budget even though this is a key factor driving the performance of the algorithm.
> What is the intuition for why the initial hyperparameter search can be done with such a small privacy budget? Is it possible to simply randomly initialize $r_0$ and achieve similar performance?
Please see our comment on the AC’s thread for a comparison to random search.
In `linear_scaling.py` we provide code to run the full hyperparameter search routine. In the official comment, we have provided example traces of the searched hyperparameters for CIFAR10. As per the reviewer's recommendation, we include here an analysis of what happens when we increase the privacy budget allotted to the first hyperparameter search from $\varepsilon=0.01$ to $\varepsilon=0.05$, again on CIFAR10. Naturally, allocating more budget to the first hyperparameter search means that we cannot allocate as much privacy budget to the final run. All runs are averaged across 5 independent instantiations of the full hyperparameter search procedure and we report the standard deviation.
| $\varepsilon_1$ | $\varepsilon_2$ | $\varepsilon_f$ | Accuracy (%) | Standard Deviation |
|------------------|------------------|------------------|--------------|--------------------|
| 0.01 | 0.05 | 0.97 | **98.88** | 0.01 |
| 0.05 | 0.1 | 0.96 | 98.85 | 0.03 |
| 0.05 | 0.2 | 0.9 | 98.81 | 0.01 |
We find that increasing the privacy budget allocated to the first hyperparameter search has only a negligible impact on the final utility for CIFAR10. However, a larger value of $\varepsilon_1$ may serve to stabilize the initial phase of hyperparameter search for more difficult tasks. As guidance for practitioners, we still suggest that the hyperparameter search be instantiated with a small value of $\varepsilon_1$, and if the results are no better than random chance, we can terminate the search and increase $\varepsilon_1$ by, e.g., a factor of 5 or 10 until the results become nontrivial. In the worst case, we waste only $\varepsilon_1=0.01$ or some other small privacy budget due to fully adaptive composition.
**More details on comparisons with [3,4]**
Our goal in these comparisons is to do a fair or “apples-to-apples” comparison. For [3], that means that we reimplemented their method (because they did not provide code and did not evaluate on any of the same benchmarks that we do) in `wosm_impl.py` and gave it all the optimal hyperparameter choices that we provide our methods, such as the full-batch GD setting. We are confident that the comparison to [3] is fair, and even then we do not make [3] pay for the cost of tuning the number of epochs hyperparameter in that evaluation.
For [4] (the paper that is more concurrent to ours, with their latest revision in June 2023), we do a fair comparison by running our procedure in the exact setting they report results on. Our methods are similar in that we both run hyperparameter tuning on a set of candidates. We tune the learning rate and number of epochs, leaving the batch size as the full-batch setting. They tune the learning rate, number of iterations, and the subsampling ratio $\gamma$, and find that the best values of $\gamma$ are around $0.02$. In contrast, we do our experiments in the full-batch setting (batch size of 50000 for CIFAR10).
To analyze why our method outperforms [4], we look to the adaptive nature of our hyperparameter search and the very small privacy cost we pay. As noted above, the privacy cost we pay is the difference in performance of the final run between $\varepsilon=0.97$ and $\varepsilon=1$. Our hyperparameter tuning is therefore very privacy-efficient, because we are able to take advantage of the superior GDP composition. By contrast, [4] relies on RDP composition, which is known to be suboptimal, and allocates more privacy budget to tuning accordingly, because their search is non-adaptive random search. Our search is adaptive in that we exploit the prior that the optimal learning rate and number of iterations increase with $\varepsilon$.
We can further compare how close their final searched hyperparameters are to what our method finds, because we use the same model, optimizer, etc. We can look at their results from Fig. 5, where they tune the learning rate; they find the same number of epochs as we do. Their original learning rate of $\eta=0.15$ is too small for the full-batch setting, but they use subsampling. Applying the original linear scaling rule to scale up their optimal learning rate gives $\eta = 0.15 \times \frac{1}{\gamma} = 0.15 \times \frac{1}{0.02} = 7.5$, which is too large. | Summary: This paper proposes a new method to conduct hyperparameter tuning for DP stochastic gradient descent. The method is based on a linear scaling rule, with two pilot runs using small PLBs and a third run chosen based on a linear extrapolation from the first two. The pilot runs are used to establish an estimate of the intercept and the slope that the total step size r would have with respect to the PLB. The authors use this linear scaling rule to demonstrate that it works as well as grid search in optimizing for accuracy on a suite of benchmark tasks, and attempt to apply this rule to perform empirical analysis on the potential of making existing model architectures DP and the issue of robustness against domain shifts.
My assessment, consisting of strengths, weaknesses, and questions, can be found in the sections below.
Strengths: The best thing about this paper is that it develops a method based on an intuition that is potentially worthwhile. This intuition is captured in the small paragraph in Section 2, titled Linear Scaling is Intuitive. What the authors have proposed is essentially a dimensionality reduction of the hyperparameter search, and the reason why that works, in the sense that what you end up finding may not be so far off from a greedier search, is the geometry in which you force the updates to be more congruent with each other. The whole idea of linear scaling would otherwise be rather unremarkable, but if the author can further develop this intuition, formalize it, and expand on it, it would contribute some insight to the literature.
Weaknesses: The most damning weakness of this paper is that it is written without due care. As a consequence, the main results and the accompanying algorithm are not correct as stated. I don’t suggest that the author is not capable of presenting the correct science -- to that question I do not know the answer. However, as things stand, the paper is not ready to be published.
The presentation in the introductory and main result sections wanders seemingly fluidly between epsilon-DP, (epsilon, delta)-DP and Gaussian DP:
1. Definition 1.1 is given in the language of (epsilon, delta)-DP;
2. The DP-SGD Definition is given without a quantification of its DP guarantee at all;
3. Algorithm 1, which employs the DP-SGD given before, states that its output is epsilon-DP, where an alleged PLB accounting between epsilon and sigma is not supplied. (In reality, a delta would be needed, so the provided guarantee is incorrect to begin with.)
4. Then Proposition 2.1, which concerns Algorithm 1, gives a GDP guarantee in relation to sigma only, where sigma is not constructed as a function of epsilon (or the missing delta) in Algorithm 1;
5. Corollary 2.2 now qualifies Algorithm 1 as (epsilon, delta)-DP, with a one line proof given in the Appendix citing another work and has no substance on its own.
All of the above is confusing at best. For a standard reader, a student coming into the DP world for example, these are not pedagogically informative.
Back to Algorithm 1:
1. It contains four privacy loss budget expressions: epsilon, epsilon_0, epsilon_1, and epsilon_f. Based on the context, am I to infer that epsilon is the sum of the other three?
2. The quantity r on the 12th line (beginning with Decompose). Is this a generic r, as you use it on line 7, or is it in fact referring to r* on line 9?
3. When you speak of the “decomposition” of r, what is to be found exactly -- eta given r and T (my guess), T given r and eta (please explain), or both eta and T given r (please explain as well)? If my guess is correct, then do we know that the eta found here will automatically satisfy the condition given in Theorem 2.3?
Line 143 begins with “We apply this theorem to logistic regression.” Then Line 151 continues, “While our theorem only holds for linear models…”. Nothing said between Line 143 and Line 151 constitutes a proof that Theorem 2.3 applies to linear models. This point should either be rectified with a formal analysis or deleted, so as to not be an exaggeration of contribution.
Section 3.1 is misleading and should be thoroughly rewritten to rid all expressions of “randomly”, “sample”, and “uniformly”. The author picked the experimental values. No sampling, in particular no random sampling or uniform random sampling of values, took place. It is not clear what “r = 75” is an approximation of (Line 179).
In addition, based on my reading of Section 3.2, I believe it should not be presented as is. My understanding of what Section 3.2 does is that it uses the linear scaling rule proposed in this work to construct "accuracy hypotheticals” for the listed models and datasets, as well as the domain shift situations, and compares those numbers with existing experimental results. If that is the case, this is a dangerous operation. The linear scaling rule, when used as a heuristic to make tuning faster, is fine, as the worst that could happen is that one misses out on the most efficient model tuning. However, as the rule is employed in Section 3.2, it is taken as a scientific theory relating epsilon and accuracy. The accuracy numbers you get from it are no different from terribly extrapolated numbers based on a linear model fitted with two data points. If you really want to use the linear scaling rule to poke at the said questions, actual experiments should be conducted to confirm these extrapolations. Of course, I may have misunderstood what was actually done and in particular, whether actual experiments were performed — although if so, what would be the contribution from the linear scaling rule?
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Below are a set of minor comments.
1. Please define SOTA.
2. Please also define r, eta, and T before they are used for the first time. After the first time, there is no need to always state that one is the product of the other two.
3. Lines 83-85: I understand that these are conclusions based on your empirical analysis in Sections 3.2 and 3.3. As the statements read, they cannot possibly be correct without qualifications to the specific characteristics of the models and benchmarks that you examine.
4. The fourth to last line in Algorithm 4 does not read properly.
5. Line 106: “We provide a privacy guarantee in 2.” What is 2?
6. The x axes of Figure 3 are not labeled.
7. Line 196: “…their value of r is ≈ 1000× smaller than ours.” What you mean is that “their value of r is about a thousand times smaller than ours.”
8. Line 216: “In Fig. 3 we report that following Algorithm 1 produces new state-of-the-art results for all values of ε, shown in Table 5.” This sentence seems to imply that you have applied Algorithm 1 to all Models and datasets listed in Figure 5 (which should be Table 5). This is contrary to my understanding that Section 3.2 is a thought experiment.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: As stated before, I believe the paper is written hastily to the point that the central results presented are incorrect, significantly harming the quality of the contribution and its readability. I am also concerned with the scientific merit of Section 3.2. These points are elaborated in detail in my comment section on Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review; in this rebuttal we will clear up major misunderstandings and provide clarifications.
The reviewer has commented that they believe our experimental analysis in Section 3.2 is a 'thought experiment' and that we did not run all the experiments. We wish to clarify this misunderstanding: we actually ran all of the experiments. The contribution is that running these experiments with the linear scaling rule allows us to pick hyperparameters with very high utility while also accounting for the privacy cost. In comparison, prior approaches produce suboptimal hyperparameters (see, e.g., the comparison between Bu et al. and ours) and also do not account for the privacy cost of hyperparameter tuning. We provide extensive experimental validation for this in Section 3 and Appendix A.2, and code to reproduce all our results in the first line of the Appendix. Figure 3 corresponds to one row of the table in Figure 5. The contribution, as stated in the caption of Figure 5, is that the privacy cost of running the experiment includes the privacy cost of tuning the hyperparameters, because of the linear scaling rule; ordinarily, doing a grid search would have a very large privacy cost. The optimality gap between the linear scaling rule and grid search is given by Figure 3. As you noted, "the linear scaling rule [can be used] as a heuristic to make tuning faster". It is the goal of our work to make tuning faster and more private, and our experimental analysis in Section 3 shows that we succeed on both accounts.
We appreciate the reviewer's careful reading of our theory in Section 2. We have incorporated these comments into the following overhaul. First we will rewrite Lines 125-134 and update the corresponding lines in Alg. 1 with the following procedure. Given a desired final $(\epsilon, \delta)$-guarantee, we will use the GDP-approx DP conversion from Corollary 2.13 in [18] to find the appropriate value of the parameter $\mu$ for GDP. Then we will allocate $\mu$ across the hyperparameter optimization runs and final run according to $\mu = \sqrt{3 \mu_{1}^{2} + 3 \mu_{2}^{2} + \mu_{f}^{2}}$, where $\mu_{f}$ is the privacy parameter for the final run that uses the hyperparameters adaptively chosen by using the linear scaling rule on the outputs of the runs with smaller privacy budgets $\mu_{1}, \mu_{2}$. Then for a given $\mu_{i}$, we will decompose this into $(T, \sigma)$ according to Proposition 2.1 in the main paper.
By writing the privacy analysis in this manner, we can stay in GDP the entire time and we hope these presentation changes will address the reviewer's feedback.
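A numeric sketch of that accounting, under our reading of the rebuttal: `gdp_delta` is the standard GDP-to-(epsilon, delta) conversion formula, `solve_mu` bisects for the total GDP parameter, and `final_run_mu` inverts the stated budget split. All function names are ours.

```python
import math

def gdp_delta(mu: float, eps: float) -> float:
    """delta(eps) implied by mu-GDP, via the standard conversion."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
    return Phi(-eps / mu + mu / 2.0) - math.exp(eps) * Phi(-eps / mu - mu / 2.0)

def solve_mu(eps: float, delta: float) -> float:
    """Bisect for the largest mu whose mu-GDP guarantee implies (eps, delta)-DP."""
    lo, hi = 1e-6, 100.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if gdp_delta(mid, eps) <= delta:
            lo = mid
        else:
            hi = mid
    return lo

def final_run_mu(mu_total: float, mu_1: float, mu_2: float) -> float:
    """Invert mu_total = sqrt(3*mu_1^2 + 3*mu_2^2 + mu_f^2) for mu_f."""
    rem = mu_total ** 2 - 3.0 * mu_1 ** 2 - 3.0 * mu_2 ** 2
    if rem <= 0.0:
        raise ValueError("pilot runs consume the entire budget")
    return math.sqrt(rem)

mu = solve_mu(eps=1.0, delta=1e-5)            # total budget expressed as mu-GDP
mu_f = final_run_mu(mu, mu_1=0.05, mu_2=0.1)  # what remains for the final run
```

The remaining `mu_f` (or each `mu_i`) would then be decomposed into a `(T, sigma)` pair as in Proposition 2.1 of the paper, which this sketch does not cover.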
We now address line by line weaknesses and comments. In particular, we show that our Theorem does have a proof that applies to linear models.
> The quantity r on the 12th line (beginning with Decompose). Is this a generic r, as you use it on line 7, or is it in fact referring to r* on line 9?
You are correct; this is referring to r* on line 9. We will fix this.
> When you speak...
Both eta and T given r; the function we use is just a simple nested for loop that iterates over randomly shuffled arrays of valid epoch and learning rate values and checks whether their product is within some tolerance of the given r. As you noted, it's likely that the learning rate does not satisfy the condition in Theorem 2.3, because the smoothness constant has not been calculated. We will clarify this in the camera ready. We will provide a subroutine for decomposing r in the camera ready.
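A minimal sketch of that decomposition subroutine as we understand the description; the array names echo the valid_epochs/valid_lrs arrays mentioned elsewhere in this rebuttal, and the tolerance value is our assumption.

```python
import random

def decompose_r(r_star, valid_epochs, valid_lrs, tol=1.0, seed=0):
    """Find (eta, T) with eta * T within `tol` of r_star, by scanning
    randomly shuffled arrays of valid epoch and learning-rate values."""
    rng = random.Random(seed)
    epochs = list(valid_epochs)
    lrs = list(valid_lrs)
    rng.shuffle(epochs)
    rng.shuffle(lrs)
    for T in epochs:
        for eta in lrs:
            if abs(eta * T - r_star) <= tol:
                return eta, T
    return None  # infeasible for the given grids and tolerance
```

With r* = 75, for instance, this can return eta = 0.75, T = 100 or eta = 1, T = 75, matching the example discussed later in this rebuttal.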
> Line 143...
The proof of Theorem 2.3 can be found in Appendix A.5, as referenced on line 129. The exact line for the proof of Theorem 2.3 starts on line 855.
> Section 3.1...
We agree that without attestation for the random number generation we should omit any mention of it from the text. However, we did not cherry-pick the values; there is a wide range of hyperparameters that produce extremely high performance on CIFAR10. If the reviewer doubts our claims, we invite the reviewer to use our provided code to evaluate some random combinations of hyperparameters to gauge the robustness, or look at our Figures 19/20. Line 179 omits the bias term (+1.25) because 76.25 would require using a learning rate of 0.7625, which is nonstandard. By nonstandard, we mean that our code has two arrays, valid_epochs and valid_lrs, and randomly shuffles them. It then iterates until a combination within a tolerance of r is found; in this case, 0.75 * 100 is within the tolerance (as would be 1 * 75).
> Please define SOTA.
We write out state-of-the-art in line 9 and will add "(SOTA)" afterwards so that it serves as a definition.
> Lines 83-85...
We will add the qualifiers "when the models are pretrained from public data" and "on the spread of 20 benchmarks we examine"
> The fourth to last line in Algorithm 4 does not read properly.
In this line we are substituting the unit norm clipping threshold for the DP-SGD equation and also saying we always do the full-batch update. We will explicitly say this in the text of Algorithm 1.
> Line 106...
Apologies, we accidentally used \ref instead of \cref; this should refer to Proposition 2.1.
> The x axes of Figure 3 are not labeled.
Apologies, the x-axis is epsilon, we will fix this.
> Line 196...
Yes, that's correct, thank you.
> Line 216...
Section 3.2 is not a thought experiment. We carried out all the experiments. You are correct that it should be Table 5; we will fix this.
> As stated before...
We hope that our clarifications (primarily on the misunderstanding that Section 3.2 is a thought experiment) can improve your score. We thank you for the detailed review.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for taking their time to respond. I have updated my rating in a favorable manner to acknowledge the attempt they have made to clarify the conceptual understanding.
I will say one more thing for the authors' own benefit. Moving forward, please do not make statements such as "If the reviewer doubts our claims, we invite the reviewer to use our provided code to evaluate..." You are not in an antagonistic relationship with the reviewers, who are merely speaking on behalf of potential readers -- consumers of the science which you are trying to produce. The onus is on you to demonstrate trustworthy work, not on the rest of the world to prove it otherwise.
This paper still has a long way to go before it can be called a good piece of literature. I have no doubt that the authors have done the hard work producing the experimental results. However, presenting the work in a way that a reader can, without your clarifying on their side, understand the correct messages is part of the job, arguably just as hard.
---
Reply to Comment 1.1.1:
Comment: We would like to apologize that our previous comment seemed antagonistic, as that was not our intention. We merely wished to bring to the reviewer's attention that we have included code, which may serve as supplementary evidence. We appreciate the reviewer's diligent efforts in providing insightful comments that help enhance the quality of our paper. In the forthcoming revision, we will craft a more persuasive discourse to ensure that our results do not seem cherry-picked or too good to be true. We will incorporate all of the reviewer's feedback to ensure our work is trustworthy and conveys the correct message. We also welcome any further feedback or reservations the reviewer may still hold.
AutoGO: Automated Computation Graph Optimization for Neural Network Evolution | Accept (poster) | Summary: This paper proposed a new NAS algorithm where a performance predictor is built for acceleration. In addition, the experiments are conducted for verification.
============================
Thanks for the authors' rebuttal. Unfortunately, my concerns are still not addressed. For example: 1) the data used are not a benchmark; the cited reference also does not say it is a benchmark, yet the authors believe it is. I still believe the novelty of the work is very limited and that it cannot address the real concerns of the community.
Strengths: The search space is built on the segments of CG of DNNs.
Weaknesses: Currently, "performance predictor" is more common than "neural predictor," though they refer to the same thing.
Some claims about neural predictors are not correct. For example, "Neural predictors treat NAS benchmarks as datasets." In fact, the early works on this aspect did not use the NAS benchmarks as the dataset, and these works even predate the NAS benchmarks. A baseline in this topic is [1], which is often compared with peer competitors.
[1] Sun et al., "Surrogate-assisted evolutionary deep learning using an end-to-end random forest-based performance predictor," TEVC 2020.
The predictor is built on the subgraphs mined from NAS benchmarks. To this end, the constructed predictor cannot be generalized to other search spaces and can only be used for the same search space. As a result, the novelty of the work is limited.
The whole algorithm is very similar to existing NAS algorithms, the difference being that existing NAS algorithms use a search space composed of architecture units, whereas the proposed algorithm is based on segments of the CG of a particular architecture. The motivation is not clear: why is the use of CG segments more suitable?
The adopted optimization is indeed a multi-objective evolutionary algorithm, yet it is called "A Pareto front evolution strategy." In this case, are the two objectives, i.e., accuracy and a chosen hardware-friendliness measure, treated as conflicting objectives? If so, I would ask about the contribution of this work compared to NAS algorithms falling into the class of multi-objective NAS algorithms, such as NSGA-NET.
For the example given: "a Conv 3x3 node with incoming edges from Add and BatchNorm operations and an outgoing edge to a ReLU operation as "conv2d3,in,add,batchnorm,out,relu"", the kernel size and stride information has been removed. This concerns the encoding of architectures, and there are multiple works in this area. Different architecture encodings have different impacts on performance. Clearly, the one proposed in this paper loses information about the architecture.
The experiments on NAS-Benchmarks are not sufficient because their search spaces are too simple. I suggest the authors check recent works on performance predictors.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: This paper ignores many more existing works on performance predictors, including the encoding of architectures in this topic. Compared to these existing works, the proposed algorithm in this paper has a very limit contribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1 "Some claims about neural predictors are not correct."
Earlier works like [1] didn't use the term "NAS Benchmarks" as they were published prior to NAS-Bench-101 (which popularized the term). However, [1]'s Introduction says "The training data of the random forest are a set of data pairs, and each pair is composed of the CNN architecture and its performance." This means the training data is still a benchmark set consisting of labeled architectures. Since 2020, the term "benchmark" has become common and usually refers to a set of architectures labeled with performance. Our idea is to pretrain a predictor on sufficiently diverse families of labeled architectures (called benchmarks) so that it generalizes well to other CNNs under a generic CG representation.
### W2 The predictor is built on the subgraphs mined on NAS benchmarks and cannot be generalized to other search spaces.
This is not true. The inputs to our PSC predictor are subgraphs of available CGs, which provide a general architecture encoding. We can achieve generalization because the segments consist of primitive operations (nodes), e.g., Conv (torch.nn.Conv2d; tf.keras.layers.Conv2D), BN, and ReLU, which are the smallest building blocks of CV models in general. Moreover, our segments are mined via BPE, which first captures all possible single nodes (the smallest granularity is one op) before considering any 2-node, 3-node, ... subgraphs. We train the predictor on a diverse range of NAS benchmarks (80% of 21k architectures) to achieve generalization to general forms of CNNs.
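The fine-to-coarse mining step can be illustrated with a toy byte-pair-encoding pass over linearized op sequences. This is a sketch of the general BPE idea, not AutoGO's implementation; the '+'-joined segment names are our convention.

```python
from collections import Counter

def bpe_segments(op_sequences, num_merges=10):
    """Mine frequent segments from operation sequences via byte-pair
    encoding: repeatedly merge the most frequent adjacent token pair.

    Starts from single ops (smallest granularity), so every primitive
    node is a segment before any multi-op segment is created.
    """
    seqs = [list(s) for s in op_sequences]
    vocab = set(tok for s in seqs for tok in s)  # all 1-op segments
    for _ in range(num_merges):
        pairs = Counter()
        for s in seqs:
            pairs.update(zip(s, s[1:]))
        if not pairs:
            break
        (a, b), freq = pairs.most_common(1)[0]
        if freq < 2:
            break  # no pair occurs often enough to merge
        merged = a + "+" + b
        vocab.add(merged)
        for s in seqs:
            i = 0
            while i < len(s) - 1:
                if s[i] == a and s[i + 1] == b:
                    s[i:i + 2] = [merged]
                i += 1
    return vocab

ops = [["conv", "bn", "relu", "conv", "bn", "relu", "add"],
       ["conv", "bn", "relu", "pool"]]
segs = bpe_segments(ops, num_merges=3)
# frequent blocks such as a merged conv/bn/relu sequence emerge as segments
```

In AutoGO the tokens are graph-derived (the segmentation is mapped back onto CG subgraphs afterward), so this toy linear version only conveys the frequency-driven merging.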
In Tables 3-6, we showed that our predictor generalizes outside of the NAS-Benchmark search spaces it is trained on. We use AutoGO with our PSC predictor to optimize architectures outside of NAS-Benchmarks, like ResNets/VGG/EDSR/FSRCNN, and even on other tasks. This is in contrast to current predictors in the literature, which use encodings too specific to their own cell/backbone design space and cannot generalize.
### W3 Why the use of segments of CG is more suitable?
First, existing NAS usually defines a search space and searches in that space; e.g., like most other works, [1] uses predefined macro skeletons and op sequences, only considers a handful of blocks from classical networks like ResNet/DenseNet plus pooling blocks, and allocates them into a fixed macro skeleton. AutoGO aims to do NAS in a different way: it directly edits a given network's underlying CG (which unifies any network representation) at fine-to-coarse granularity, helping ML engineers quickly mutate and deploy a well-known network on devices. Our data-driven, algorithm-mined segments replace manual design choices and provide the flexibility to mutate a network by single ops or by complex subgraphs of up to 15 nodes (leading to more aggressive topology changes); e.g., Fig. 8 (Supp. Section A.8) shows how 8 classical EDSR residual blocks are replaced by 3 types of bigger blocks with operations on both parallel branches. These bigger blocks are generated using our algorithm-mined segment database and are very hard to discover manually for the task.
### W4 the contribution compared to multi-objective NAS like NSGA-NET.
Multi-objective NAS, which represents large literature, is not our contribution here. We just use a simple evolutionary algorithm to update the Pareto front under defined multiple objectives. The contribution of AutoGO is a full framework that can directly mutate any input CNN (with our repo of ops and subgraphs discovered from data and without reinventing the search space) to yield a better one for the device (lower latency/GFLOPs and higher accuracy). Our results in Tables 3-6 show clear gains in this use case.
While an early work, NSGA-NET, does not repeat phases (blocks), it still places many assumptions on search space design: "for computationally tractability, we (NSGA-NET) constrain the search space such that each node in a phase carries the same sequence of operations, i.e. a 3 × 3 convolution followed by batch-normalization and ReLU." This is incompatible with NB-201, which uses a ReLU-Conv-BN ordering, or EDSR, which argues against the use of BN for Super Resolution altogether. Using CGs, AutoGO accommodates all of these cases. Another weakness is that the bit-string encoding in NSGA-NET loses information from the original graph when mutation and crossover are done on the bit strings.
### W5 the encoding proposed in this paper lost information regarding the architecture.
The quoted sentence is not related to encoding at all. To clarify, it applies only to the segmentation process (a specific step where we convert the CG into a sequence before BPE segmentation). After segmentation, we map the character-level segmentation back into the CG representation to partition it into subgraphs. In other words, our predictor is a GNN trained directly on CGs, and AutoGO mutates CGs (graphs) directly using the mined segment database, which keeps all kernel-size, stride, input/output-size, and other information. Since this information is not lost, we can perform resolution propagation. It is a strength of this work that mutation is done on CGs directly (rather than on sequential encodings) without losing any information.
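For concreteness, the sequence-conversion step can be sketched from the example token quoted in the review, "conv2d3,in,add,batchnorm,out,relu". The neighbor ordering used here is our assumption.

```python
def encode_node(op, in_ops, out_ops):
    """One character-level token per CG node: the op name, then the ops
    on its incoming edges, then the ops on its outgoing edges."""
    return ",".join([op, "in", *sorted(in_ops), "out", *sorted(out_ops)])

tok = encode_node("conv2d3", ["add", "batchnorm"], ["relu"])
# -> "conv2d3,in,add,batchnorm,out,relu"
```

As the rebuttal notes, such tokens drive only the BPE segmentation; the kernel-size, stride, and tensor-shape details stay on the CG nodes themselves.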
### W6 The experiments on NAS-Benchmarks are not sufficient because their search spaces are too simple.
First, our experiments are not limited to NAS-Benchmarks: Tables 3-6 show that AutoGO can generalize and optimize a range of real-world CNN architectures (large or small) on diverse tasks/datasets. The goal of Table 2 is to show that AutoGO can even further optimize the “best” architectures in each NAS benchmark, which is something prior literature cannot achieve. It also shows that searching in a manually designed search space is not always good enough, suggesting the necessity of a data-driven approach like AutoGO that directly edits the CG.
Based on these clarifications we hope the reviewer could reassess the contribution of the paper.
---
Rebuttal 2:
Comment: Dear Reviewer @AwFS,
On behalf of all authors, we thank you for your thorough review. We would appreciate it if you leave any comments on our responses.
We have carefully read your review and diligently provided explanations and clarifications that we believe address your concerns. We kindly request that you consider updating your evaluation if our responses have addressed your main concerns. We remain committed to further refining our work. Your feedback is crucial in ensuring the paper's overall quality, and we greatly appreciate your time and expertise in this matter. Thank you. | Summary: This paper presents the AutoGO framework, which operates directly on the Computation Graph (CG) of a given DNN architecture. It splits the CG into segments and conducts a search process. Through extensive experiments, the paper shows that AutoGO effectively improves the performance of the top architectures in various public architecture benchmarks. Furthermore, AutoGO demonstrates its capability to automatically optimize different types of large CNN architectures and achieve enhanced results in various computer vision tasks.
Strengths: 1. The whole framework from tokenization and mutation to estimation is reasonable and technically sound.
2. It operates the complex problem of directly processing the computation graph and verify on several difficult tasks.
Weaknesses: 1. The verification for the ImageNet task is missing.
2. The searched models are highly limited by the current segment database, since it only contains the benchmark architectures, which are not widely used in different tasks.
3. The training of the accuracy estimator relies heavily on the existing collected pairs of architectures and accuracies, and it would be hard to transfer with only limited accuracy numbers for other datasets and tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Does the node label contain channel information? The definition is mainly about the graph structure and seems to be missing the channel information. In that case, how would the custom framework adapt to the choice of width, depth, and kernel size?
2. What if the Segment Database is based on some widely used architecture including Conv, MLP, and transformers rather than the NASBenchmark architecture? Would the generated architecture still perform better than the current Pareto front?
3. The choice of the search space covers results mainly in small datasets such as CIFAR-10. How do we use these accuracies to train an accuracy estimator in other datasets or even for different tasks?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1 Verification on ImageNet
We indeed provide evaluation for ImageNet tasks by training ResNet-50/101 and VGG16-BN on ImageNet and then further fine-tune these architectures on Cityscapes (Semantic Segmentation) and MPII (Human Pose Estimation). We report the results in Table 3 and list our training/fine-tuning setup in Supplementary Section A.4.
### W2 Searched models are limited as the segment database only contains benchmark architectures
Our segment database does not contain the benchmark architectures themselves, but rather the frequent subgraph segments extracted from these diverse benchmarks in a data-driven way. The segments range from 1-node primitive operations (e.g., conv, relu, pooling, add, concat, etc.), most of which are universally present in CNN architectures, up to more complex 15-node/16-edge subgraphs (e.g., the HiAML subgraph used in Figure 4) that vary greatly in terms of computations and hardware-friendliness. For instance, ResNet residual blocks consist of the 'Conv-BN-ReLU' sequence, which is in our database, as is the Max Pool operation VGG uses for downsampling. Our segments also generalize across different CV tasks, like EDSR [1], which does not use BatchNorm for Super Resolution and instead opts for simpler 'Conv-ReLU-Add' sequences, which are also present in our database.
### W3/Q3 The training of the accuracy estimator highly relies on the existing collected accuracy and model data pairs, which is hard to transfer
In this paper we do not attempt to train a predictor to estimate the exact task and/or dataset performance of a given architecture. Note how we use Spearman's Rank Correlation Coefficient (SRCC) in Table 1 to evaluate our PSC predictor, which only considers the relative rankings of predictor outputs compared to the ground truth. Rather, our PSC predictor is designed to focus on how a specific segment $s$ in an architecture contributes to the overall performance given its position relative to the rest of the architecture (represented by $P$ and $C$). We do this in order to estimate if replacing (mutating) $s$ with a new segment $s^*$ from our database will bring performance benefits. As such, when AutoGO is optimizing an architecture CG, the predictor acts as a 'proxy' where higher predictor outputs correspond to better architectures. In our experiments on multiple CV tasks and datasets, like ImageNet Classification, Semantic Segmentation on Cityscapes, Human Pose Estimation on MPII as well as Super Resolution on DIV2K/Set5/Set15/etc., and even Denoising using a proprietary in-house dataset, the mutated, hardware-friendly architectures found by AutoGO provide superior performance, which validates our approach.
### Q1 Does the node label contain channel information? How would the custom framework adapt to the choice of width, depth, and kernel size?
Yes, all CG nodes contain input/output height, width, channel (HWC) tensor size information, while nodes with learnable weights (e.g., Conv, Linear) contain weight tensor size (e.g., convolution kernel size) and a bias boolean as node features.
Therefore, when we use AutoGO to optimize a given input architecture (e.g., ResNet, VGG, EDSR, etc.), we know the HWC information of every operation node that comprises said architecture. During mutation, our resolution propagation MILP attempts to adapt replacement segments $s^*$ from our database to match the input/output HWC of the predecessor $P$ and successor $C$ of the input architecture. The MILP accomplishes this by tweaking the strides of convolution/pooling nodes in order to adjust HW, as well as the input/output channels of convolution nodes in the replacement segment $s^*$. We do not adjust the kernel size of convolution operations. Note that in some cases the MILP may not find a solution; e.g., if $s$ performs downsampling, yet $s^*$ does not contain any nodes that can perform downsampling (conv or pooling ops), the proposed mutation {$P, s^*, C$} is deemed infeasible.
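To make the feasibility question concrete, here is a toy sketch (not the actual MILP formulation from the paper): it checks whether the spatial downsampling factor required of a replacement segment can be expressed as a product of per-node strides. The function name, the restriction to one spatial dimension, and the stride range are illustrative assumptions of ours.

```python
def feasible_strides(in_hw, out_hw, n_stride_nodes, max_stride=2):
    """Toy feasibility check for resolution matching: can the required
    downsampling factor (in_hw / out_hw) be realized by assigning a
    stride in {1, ..., max_stride} to each of n_stride_nodes conv/pool
    nodes in the replacement segment?  (The real system solves this,
    together with channel matching, as an MILP.)"""
    if out_hw <= 0 or in_hw % out_hw:  # non-integer downsampling factor
        return False
    factor = in_hw // out_hw
    # greedily peel off one stride assignment per stride-capable node
    for _ in range(n_stride_nodes):
        for s in range(max_stride, 1, -1):
            if factor % s == 0:
                factor //= s
                break
    return factor == 1
```

For example, a segment needing a 4x spatial reduction is feasible with two stride-2 convolutions, but infeasible with only one.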
### Q2 What if the Segment Database is based on some widely used architecture including Conv, MLP, and transformers rather than the NASBenchmark architecture?
Currently, our database spans a wide range of subgraphs/segments that are used in various convolution-based computer vision models. Our framework could be extended to extract subgraphs from Transformer models which can enrich our database to further improve accuracy and hardware-friendliness. This is definitely a direction for future work given the rise of attention-based structures in computer vision architectures.
References:
[1] "Enhanced Deep Residual Networks for Single Image Super-Resolution" - CVPR'17.
---
Rebuttal Comment 1.1:
Title: Post-Rebuttal Comments
Comment: I appreciate your responses addressing my concerns. I genuinely like the idea of automated graph optimization from segment databases, although I still doubt that the performance predictor can give an accurate performance ranking. Overall, I would like to maintain my current assessment.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer very much for the post-rebuttal comments and the constructive feedback that helps to improve this paper. Our PSC predictor works well for the proposed AutoGO style of subgraph mutation for several reasons.
First, our predictor uses a P-S-C segmentation scheme, which introduces a form of data augmentation: for each single CG in the training set, we can sample multiple P-S-C combinations based on where the chosen segment S is in the given CG. This P-S-C sampling strategy significantly improves predictor learning and its ranking performance. Table 1 verifies the effectiveness of the PSC predictor in ranking architectures when the ratio of sampled PSC combinations to original CGs exceeds 1:1, whereas the conventional approach, a GNN trained on the same number of original CGs, fails on more challenging benchmarks like HiAML, Inception, and Two-Path.
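The augmentation effect described above can be sketched roughly as follows: every contiguous window of a toposorted CG sequence becomes a candidate segment S, with the prefix as P and the suffix as C, so one CG yields many predictor training examples. The function name, window bounds, and flat-list representation are our own illustrative assumptions, not the paper's implementation.

```python
def sample_psc_combinations(seq, min_len=1, max_len=15):
    """Enumerate candidate (P, S, C) splits of one toposorted CG node
    sequence.  Each window seq[i:j] of length min_len..max_len plays the
    role of segment S; everything before it is P, everything after is C.
    A single training CG thus produces many P-S-C training examples."""
    out = []
    for i in range(len(seq)):
        for j in range(i + min_len, min(i + max_len, len(seq)) + 1):
            out.append((seq[:i], seq[i:j], seq[j:]))
    return out
```

Even a short 10-node sequence yields dozens of P-S-C examples, which is the source of the augmentation ratio discussed above.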
Furthermore, the PSC predictor is not evaluated on absolute architecture performance. Rather, the goal of the PSC predictor is to determine which segment mutation (S*) will yield performance improvement in its context (the current parent CG). In other words, our predictor is trained to be sensitive to the choice of segments and segment locations relative to the current parent CG, which suits the needs of AutoGO much better than a conventional GNN predictor. Our experimental results verify the effectiveness of the PSC predictor in capturing the gains from segment-based CG mutations when optimizing a range of architectures on real-world CV tasks.
Finally, the predictor is trained on the CGs of 5 different NAS benchmarks, covering diverse op and topology types. Each individual benchmark may only cover a predefined set of ops or topological constructs: e.g., NB-101 only uses concat at the end of a cell, whereas the Inception and Two-Path benchmarks also allow concat elsewhere. As another example, NB-101 uses 'Conv-BN-ReLU' (ResNet-like sequences), while NB-201 uses 'ReLU-Conv-BN', allowing us to extract 'ReLU-Conv' as a 2-node segment which is useful for EDSR optimization (no BN). In other words, by working with benchmark sets diverse enough to cover a wide range of distinct topological characteristics, we have not only constructed a useful and diverse segment database for AutoGO, but also applied a data-driven approach to learn the benefits of segment mutations.
Title: Re: Post-Rebuttal Comments | Summary: This paper introduces AutoGO, an innovative method for evolving neural networks that addresses the challenges of efficiency, low power consumption, and hardware compatibility. AutoGO represents deep neural networks (DNNs) as computational graphs (CGs) comprised of low-level primitives and employs an evolutionary segment mutation algorithm. Notably, AutoGO employs subgraph mining from CGs while utilizing efficient tokenization through Byte Pair Encoding (BPE) from natural language processing (NLP) instead of Weisfeiler-Lehman (WL) kernels. For the evolutionary mutation process, AutoGO leverages neural prediction to explicitly consider positional and contextual information when replacing segments within a CG.
The experimental results demonstrate that AutoGO performs exceptionally well on NAS benchmarks and exhibits promising applications in various domains, including classification, semantic segmentation, human pose estimation, and super resolution.
In summary, this paper presents a novel and effective approach. The writing style is particularly engaging, making it a pleasure to read. I recommend accepting this paper.
Strengths:
1. The motivation behind this work is excellently articulated, providing a clear understanding of the research objectives and driving factors.
2. The paper effectively describes recent works in the field, highlighting their significance and comparing their novelty to the proposed method. The related work section is comprehensive and enjoyable to read, showcasing a thorough understanding of the existing literature.
3. This method is technically robust, demonstrating impressive performance across various evaluations. The experimental results validate its effectiveness and reliability, further strengthening the credibility of the approach.
Weaknesses:
1. Although the method presented in the paper is highly technical, the focus seems to be predominantly on the technical details rather than providing a comprehensive analysis and intuitive explanations. While I acknowledge the complexity of the method, I believe that enhancing the final manuscript with more analytical insights and intuitive discussions would significantly improve its overall quality.
2. Consider including a simplified pseudocode or algorithmic representation of the method. This would greatly facilitate the understanding of the algorithmic steps and enhance clarity for readers. A concise and structured representation of the method's flow would be beneficial in aiding comprehension.
3. I recommend revising Figure 1 to present the information in a horizontal format. This adjustment would enhance the visual clarity and make it easier for readers to follow the different components and relationships depicted in the figure.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Can the mutation algorithm effectively handle the computational demands of scaling up to large computation graphs? I'm curious to know if it can cope with the complexities involved in processing massive graphs efficiently.
2. When it comes to modeling PSC, does employing a more advanced graph neural network like Graph Transformer provide notable advantages? Or is the choice of GNN design less influential in this particular scenario?
3. Is there a possibility to substitute the mutation algorithm with a GNN policy? I'm interested to hear about any experiences or insights regarding the potential applicability of GNN policies in this context.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Please provide a limitation section for future researchers!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1 More analysis and intuitive explanations
First, we will add more intuitive explanations and an expanded limitations discussion to the manuscript. The current framework effectively solves the AutoGO problem of mutating CNNs for faster inference and hardware-friendly deployment across a range of CV tasks/networks.
Note that we do provide more information on the analytics of our segment database and a comparison with the WL-kernel of [1] for segment extraction. However, due to the page limit, we had to relegate that material to the supplementary while keeping experimental results that demonstrate the effectiveness of AutoGO on a wide range of tasks, networks and hardware-friendliness metrics in the main manuscript.
### W2/3 Simplified pseudocode and Fig. 1
Thanks for the feedback. We have provided an algorithm latex float in the PDF attached to the global response and will definitely revise Figure 1 to give it a horizontal focus.
### Q1 Handling large computation graphs.
Yes. In terms of CG size, the largest NAS benchmark we consider is Inception, where the average number of nodes per CG is 673 and the largest CG has over 1500 nodes. By contrast, the next largest benchmark is NB-201, whose largest CG has 336 nodes (half the mean node count of Inception). Inception therefore contains many massive graphs, yet not only are we able to optimize the best Inception CG, we also mine common segments across all Inception CGs to form our database. Granted, AutoGO execution time scales with the number of nodes, so optimization on Inception does take longer than on NB-201. In Supp. Section A.7 we provide a breakdown contrasting execution time for ResNet-50 (larger CG, 1.8 min/arch mutation time) vs. EDSR (smaller, 1.5 min/arch mutation time).
### Q2 Modeling PSC using a more advanced GNN like Graph Transformer
It is possible that using a more advanced, potentially attention-based GNN would bring performance improvements, which is an exciting avenue for future work. In this paper, we establish a framework that we can build on top of by improving its components, such as the PSC predictor, the search algorithm, and the database, which could speed up the whole search process and yield better-performing mutant architectures.
### Q3 Mutation for a GNN Policy
An interesting direction for future work is a GNN policy-based mutation approach that would model the sampling, aggregate nodes based on their importance, and better capture structural information. This could speed up the mutant-selection process and guide the search toward better candidate replacements that positively affect performance.
References:
[1] "Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels" - ICLR 2021.
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: Thank you for the rebuttal. I read the rebuttal carefully and kept the score supporting this paper to be accepted. | Summary: The paper proposes to optimise neural networks by exploiting common subgraphs mined from existing NAS benchmarks - this is achieved by building a vocabulary from networks encoded into topologically sorted sequences and using byte-pair encoding (BPE) to obtain common sequences of operations. After that, a given neural network is segmented and different mined segments are considered as replacement for different identified segments in the given network, all done while taking care of shape propagation. Searching for the best replacement segments is done with a variation of a multi-objective evolutionary algorithm which optimizes for Pareto-efficiency using a proposed (GNN-based) PSC predictor.
Strengths: - generalizing blocks to segments is an interesting and sensible step towards more flexible NAS
- the proposed system seems technically advanced, taking care of quite a few corner cases in a convincing way (e.g., solving resolution propagation with linear programming)
- encoding neural networks as sequences and using BPE is an interesting take on representing neural networks (but could be studied in more details)
- the proposed PSC predictor seems like an interesting variation of the more standard GNN-based predictors
- experiments are designed to support claims made in the paper (but results are somewhat hard to interpret, see below)
- the method (at least after all one-time costs) seems fast, finishing within a few hours at most
Weaknesses: - Clarity could be improved in certain places
- "First, benchmarks (...) requires training the new architecture from scratch" - why is this relevant for the presented work?
- "(...) predictors learn using high-level cell representation (...), In contrast, AutoGO can mutate an architecture (...)" - why are these two things compared to each other? The ability to mutate beyond an original design space (by the way, this is a tricky thing to formally define, I would appreciate an attempt at that) is orthogonal to a predictor's ability to capture spatial information of a network. Many NAS algorithms achieving similar (or even greater) coverage of architecture than the proposed work, e.g., LEMONADE or $\mu$NAS seem particularly relatable since they utilise mutations towards a similar goal as AutoGO (mutating away from the original design).
- there seem to be some contradictory information presented regarding what operations are used, first we read (line 43): "(...) by evolving its underlying computation graph (CG) using its original primitive operations.", but then (line 58): "A vocabulary of segments are mined from a large number of CGs from several NAS benchmarks", please clarify
- The provided definition of PSC does not seem to properly cover nodes parallel to $s_i$.
- If $s^*$ has more than one input, how does the method handle assigning P's outputs to a replacement's inputs? (and analogously for S) Is it a part of the LP problem?
- it is unclear if a randomly initialized or a pretrained architecture is expected; I couldn't find any information about pretraining a network, but then line 225 says "we retrain all the segment replacements" suggesting the original segments might be trained already (?); it is also unclear why this retraining of segments is needed, considering a performance predictor is used, and how it is done
- there is some overlap between the proposed method and blockwise NAS works, such as DNA, DONNA or LANA; I think it would be better if the authors acknowledged existence of this line of work, right now it is completely ignored, despite high-level similarities
- I think the authors should discuss in more details their choice of using toposort+BPE to mine for subgraphs - this approach is bound to fail to recognize many isomorphic subgraphs as the same segments (hinted at the beginning of Section 4), why do you think this is not a problem? How does this greedy approach compare to other alternatives?
- Results are, generally speaking, hard to compare to the rest of the literature. More specifically:
- apart from the common benchmarks (NB101, NB201), the paper uses HiAML, Inception and Two-Path - to the best of my knowledge, these are only used by a single, very recent (AAAI'23) paper; however, I don't see any benefits stemming from this choice while it does make comparison to other works harder
- FSRCNN and U-Net experiments use proprietary networks and tools and, on top of that, only relative improvements are reported in some cases; rendering these experiments basically unverifiable and unusable by the community
- at the same time, some of the reported results are not particularly convincing, such as:
- baseline EDSR 2x upscaling performance is actually significantly worse than reported in the original 2017 (!) paper, $\Delta$ PSNR of: -1.25, -1.35, -0.93 and -3.79 for Set5, Set14, B100 and Urban100, respectively. Results are better for DIV2k (I have to assume the authors mean DIV2k validation set), but that's just one dataset out of 5 (and the one used for training),
- the proprietary FSRCNN also achieves significantly worse results than its parent model, while requiring approx. 150x more FLOPS (!!!)
- ResNet-50 and ResNet-101 baselines are also worse than reported in the original paper (ImageNet), not to mention any recent improved training recipes
- it is actually not very clear, but following on the information presented in Section 4.1, it appears that the results in Table 2 were obtained by using a predictor pretrained on 80% of the data available for each benchmark (at least that's the only information we are presented about predictor training, so I'm assuming that's the case for subsequent sections as well); this means that the improvements presented are actually occupied by a very high, hidden cost of having lots of in-domain training data, while in many cases they are not significant
- perhaps I missed that, but I couldn't find information about the cost of all the pretraining etc.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See weaknesses, plus a bonus question: is there a technical reason to stick to (Chinese) characters when representing neural networks? It seems like the method could easily work with just arbitrary numbers instead, so I'm wondering if I missed something or if that's just an arbitrary decision
Also, some typos:
- line 49: replace ":" with "."?
- line 160: change to just "AutoGO uses D"? Right now this parts reads like "AutoGO uses D according to D" (since earlier it is said that "segment dataset" == "D")
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No discussion about limitations - the presentation is actually quite one-sided, by only considering benefits of the proposed method. For example, using BPE is said to "bring several benefits" over methods like WL but not a word about possible downsides (worse handling of isomorphic graphs).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1 Comparison to LEMONADE/uNAS/Blockwise NAS
AutoGO differs from these works. LEMONADE uses network morphism rules (its Sec. 3.1), e.g., "inserting a Conv-BatchNorm-ReLU block", whereas AutoGO uses diverse, data-driven subgraphs of up to 15 nodes for mutation. uNAS predefines a traditional macro NAS structure (its Table 1).
Blockwise NAS methods operate within a predefined backbone structure; e.g., Fig. 2 of DONNA shows a defined search space over N blocks. DNA and LANA are both teacher-student distillation NAS frameworks that adjust layers/channels per block. The goal of AutoGO, as a novel automated toolchain for ML engineers, is to eliminate the predefined structure in NAS and to automatically edit the CG of an input architecture with a fine-to-coarse segment vocabulary. It automates inference acceleration of an architecture on the target hardware without reinventing or limiting the backbone to a manual choice.
### W2 PSC Parallel Segments/Multiple Inputs
This is handled by topological sort applied prior to BPE. Numerical labels are assigned to each node, indicating its position in a sequence. Nodes/segments parallel to S would be allocated to P or C. See Figure 6(b) in the supplementary materials and Figure 9 in our global response PDF. Also, if there are multiple valid mappings from P to S (or S to C), then AutoGO chooses one at random.
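The toposort-position rule above might be sketched as follows (the function, the list-based node representation, and the split rule details are our own hypothetical illustration of the allocation idea, not the paper's code):

```python
def split_psc(topo_order, segment):
    """Partition a topologically sorted node list into (P, S, C).

    Nodes parallel to segment S have no ordering constraint relative to
    it; here they are allocated to P or C based purely on their position
    in the topological order relative to S's first node."""
    seg = set(segment)
    first = min(topo_order.index(n) for n in segment)
    P, C = [], []
    for i, n in enumerate(topo_order):
        if n in seg:
            continue
        (P if i < first else C).append(n)
    return P, list(segment), C
```

For instance, a branch node that the toposort happens to place inside S's span would be allocated to C under this rule.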
### W3 Toposort+BPE Isomorphisms/Comparisons
The choice of toposort+BPE is a tradeoff between result completeness (extracting all subgraphs) and accuracy (estimating subgraph counts) on one hand, and efficiency (memory) on the other. While relaxing the problem into mining segments from sequences instead of graphs hinders recognizing isomorphisms during mining, it enables extracting subgraphs from multiple CG families and overcomes memory inefficiencies. By converting the extracted subsequences back into their original subgraph form, we are able to recognize isomorphism among the extracted subgraphs. We compare to the WL-kernel of [1], who extract motifs, and show the speedup of our approach. However, the WL-kernel has disadvantages: primarily, it does not scale well with the size of the graph in terms of RAM and execution time. This limited the number of CGs we could consider at a time, so we could not perform extraction over all CGs in a NAS benchmark dataset (we can with BPE). Even on a subset of graphs, extraction took at least 6 hours (BPE takes less than half that time). We describe this comparison and provide a figure in Supp. Section A.3.
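A minimal sketch of one BPE merge step over toposorted op sequences may help illustrate why this scales: it is linear-time counting over sequences rather than subgraph matching. This is illustrative only; the actual pipeline operates on the character-encoded "node|incoming|outgoing" sequences described elsewhere in the rebuttal.

```python
from collections import Counter

def bpe_merge_step(sequences):
    """One BPE merge: find the most frequent adjacent token pair across
    all toposorted CG sequences and fuse it into a single segment token.
    Repeating this grows a vocabulary of increasingly large segments."""
    pairs = Counter()
    for seq in sequences:
        pairs.update(zip(seq, seq[1:]))
    if not pairs:
        return sequences, None
    best, _ = pairs.most_common(1)[0]
    merged = []
    for seq in sequences:
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                out.append(seq[i] + seq[i + 1])  # fused segment token
                i += 2
            else:
                out.append(seq[i])
                i += 1
        merged.append(out)
    return merged, best
```

Iterating this step is what produces the fine-to-coarse segment vocabulary; each fused token can then be mapped back to its subgraph form for isomorphism checks.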
### W4 Results hard to compare to the rest of the literature; use of newer benchmarks
1. Prior literature finds architectures within a predefined search space. In Sec. 4.2, we show that even the best architectures from 5 existing NAS benchmarks can be edited by AutoGO for improved accuracy and lower FLOPS, showing that the bottleneck in NAS is the manually designed search space in the first place, not the search algorithm. The AutoGO concept is therefore new: it can edit and mutate an existing architecture for better accuracy/faster inference, an ability that prior works lack because they define their own search space of backbones/blocks, which limits performance.
2. Using newer benchmarks increases the diversity/coverage of our segment database, which brings benefits to downstream tasks, e.g., Figure 4, where a HiAML block is adopted by a mutant of ResNet-50 by AutoGO.
### W5 Comparing Tables 3-5 to the original papers
We want to clarify that rather than advancing the training performance of EDSR or ResNet-50/101, the goal of the experiments in 4.3 (Tabs. 3, 4) is to show that even on established neural architectures and on GPUs, AutoGO can still automatically optimize these architectures for more efficient computation. This ability to automatically generate better architectures based on original architectures is no coincidence and is verified through extensive experiments on multiple networks.
The performance metrics we report are not directly comparable to the top-1 or PSNR numbers reported in the original papers for two reasons. First, to provide a fair comparison between our baseline architectures and the mutants optimized by AutoGO, we use a function provided by the CG API to instantiate a trainable model from the CG (as the mutant architectures returned by AutoGO use this format), which results in some implementation differences between the CG-instantiated model and the model definition in the original source code (as the CG API needs to be general enough to support many architectures). Second, our training hyperparameters (Supp. Section A.4) differ from the original papers; e.g., the original EDSR paper uses an input patch size of 48 and halves the learning rate at specific steps, whereas we use a patch size of 64, a cosine scheduler, and train for less time.
These changes produce results which are not directly comparable to what is originally reported, but not always worse; e.g., our baseline VGG16 exceeds the VGG16-BN result reported in torchvision (74.18% vs. 73.36%), and our AutoGO-optimized architecture further improves this to 74.91%. Also, all of our CIFAR-10 results in Table 2 use the same training setup, and our baseline performance on NB101 (95.18%) is higher than what is originally reported (94.24%).
### Bonus Question
BPE works best when applied to sequences of atomic characters, where a single character like 'A', '1', or '壹' represents a single meaningful entity (one node in the case of CGs). We originally tried representing nodes with English characters but found there were not enough of them given our "node|incoming|outgoing" format, so we switched to Unicode Chinese characters.
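The encoding described here could be sketched as follows (the helper name and the exact codepoint assignment scheme are our own illustration; the source only states that Unicode Chinese characters are used as atomic symbols):

```python
def build_char_vocab(node_strings, start=0x4E00):
    """Map each distinct 'node|incoming|outgoing' string to one Unicode
    character so BPE sees each CG node as a single atomic symbol.
    0x4E00 is the start of the CJK Unified Ideographs block, which
    offers tens of thousands of distinct codepoints."""
    vocab = {}
    for s in node_strings:
        if s not in vocab:
            vocab[s] = chr(start + len(vocab))
    return vocab
```

Each CG then becomes a plain string of such characters, on which any standard BPE implementation can operate directly.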
References:
[1] "Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels" - ICLR 2021.
---
Rebuttal 2:
Comment: Dear Reviewer @MAXz,
On behalf of all authors, we thank you for your thorough review. We would appreciate it if you leave any comments on our responses.
We have carefully read your review and diligently provided explanations and clarifications that we believe address your concerns. We kindly request that you consider updating your evaluation if our responses have addressed your main concerns. We remain committed to further refining our work. Your feedback is crucial in ensuring the paper's overall quality, and we greatly appreciate your time and expertise in this matter. Thank you.
---
Rebuttal Comment 2.1:
Title: Reply
Comment: Thank you for your rebuttal and please accept my apologies for a late reply.
While usually I try to comment on each point individually, considering the amount of points raised by the authors I will try to summarise my thoughts in a more concise manner for the sake of brevity.
On the high-level, my concerns about comparability and clarity remain.
Regarding the former, I understand that the authors aim to improve networks and not beat SOTA, so I'm not asking for the best possible numbers in the absolute sense. My concerns are based on the fact that it is usually easier to optimise a network that is trained suboptimally. Being able to Pareto-dominate a baseline is the golden target for any optimisation work, but it is not worth that much if the baseline is unable to reproduce 8-year-old results (ResNet). Also, I'm going to ignore VGG because this model is so old and suboptimal (in the sense of its architecture) that it really should not be used anymore - improving upon it is a really low bar.
What is more, we have to remember that comparability issues work in more than one way - even if we agree that the results presented in the paper are enough to convincingly assert strong performance of the proposed method, how is that going to work for potential follow up works? Comparison to FSRCNN and UNet is impossible, whereas comparison to any other model would either require reusing the same hyperparameters as in this paper (which are not used by anyone else and often seem to produce suboptimal results) or retrain models reported here in a new setting. In either way it seems the burden of comparing to this paper would be unfairly shifted onto the follow up works, which is not desired.
For the latter, I feel like the authors often mean something different than what is written. For example, in the rebuttal we can read "[AutoGO] can edit and mutate an existing architecture for better accuracy/faster inference, an ability that the prior works lack because they would define their own search space of backbones/blocks, which limits performance." I genuinely hope the authors did not mean to say their work is the first one that proposes to mutate architectures? In general, my understanding is the novelty of this work lies in the automated extraction of the segments library (and also some improvements to the predictor etc. but that's not relevant for my point now) - all the other things frequently mentioned by the authors, like the ability to mutate etc., are simply derivatives of this aspect and on their own are not qualitatively new.
Also, the authors mention on a number of occasions that the baseline method are limited because they require a predefined search space (e.g., pt. 1 in W4). Again, this is a bit misleading. Every method requires well-defined boundaries in order to be implemented - this is also the case for the proposed method (and is the source of my question regarding what operations are used, see a bullet point below). The limiting factor is not the existence of a predefined search space on its own but its size - the proposed method has the advantage of being able to span more networks designs easily (e.g., due to the automated mining of segments) but fundamentally it also utilises a predefined search space, even if defined implicitly.
Some notable specific comments:
- my question about the overall cost has not been answered
- my question about what operations are used has not been really answered - I understand the overall set of operations used is summarised in bottom right plot in Figure 5? Although we know from footnote 1 that "conv" exists in at least a couple of different variations so the information in the appendix is incomplete. Also, my comment was about contradictory information in the text - as far as I understand "using its original primitive operations" should be rather "using operations extracted from NAS benchmark". The point is: after learning BPEs from benchmarks, if I wanted to optimise a network that has completely different operations (e.g., a transformer) - will mutations be able to use operations from the model (not present in the segment database) or not? One part of the paper suggests it can happen, the other the opposite.
- my question about the amount of labelled data needed to train the proposed predictor has not been answered
- the authors say the choice to use BPE over WL is a trade-off, but they never discuss the downside of using BPEs, even after pointing this out in the review; while I understand it might not be feasible to scale WL to the level needed to run experiments, considering the authors already present comparison in terms of time (which is a win for BPE) I don't see a reason why not to include at least some approximated comparison on the other side of the trade-off (completeness)
With all this in mind, I would consider my current score to reflect my opinion well - the submitted paper is solid but overall the shortcomings make me lean towards rejection.
---
Reply to Comment 2.1.1:
Title: Re: reply
Regarding the comparability issue, AutoGO is an automatic framework aimed at optimizing a given neural architecture for hardware-friendly inference. The purpose of the experiments is to demonstrate that we are able to incrementally improve a given CNN architecture toward lower latency/FLOPS/power with maintained or higher performance. For all the experiments, the original network and the AutoGO-improved network are trained on the same training recipe. The contribution lies in a Computation Graph optimization framework that does not require users to manually specify their search space like NAS would do. The framework also includes the necessary components for AutoGO to operate and function on CGs, including PSC-predictor, segment extraction method, and resolution propagation, and a maintained segment database for CG mutations. This is a novel contribution on the end-to-end system level for inference acceleration (rather than optimizing training) compared to those prior NAS works that search on expert-defined search spaces. Significant efforts were involved in developing AutoGO and demonstrating benefits on a range of representative neural networks and CV tasks, which was actually not easy to achieve.
Reviewer: “Even if we agree that results presented in the paper are enough to convincingly assert strong performance of the proposed method, how is that going to work for potential follow up works?”
The fact that AutoGO-optimized architectures outperform the original architectures can be verified by comparing the architectures and training them under the same training recipe, which we did. Also, we submitted the experimental code and commit to open-sourcing the research code, including our algorithms, the segment database, and the architectures before and after applying AutoGO, which are valuable and beneficial to further studies on automated computation graph optimization and on-device DNN acceleration/deployment in real-world scenarios.
Regarding the clarity issue, please allow us to clarify that the rebuttal, written for explanation purposes, may contain more informal language (not placed in full context) than the paper itself. While we understand what the reviewer means, we would kindly invite the reviewer to refer to the paper for the formal claims regarding our technical contributions and motivation. That sentence from the rebuttal does not mean we are the first to propose mutations to neural architectures. However, in terms of practical usage, AutoGO is the first of its kind: a framework that can automatically edit any user-supplied CNN without requiring the user to specify the search space manually, i.e., by defining cells, blocks, or backbone structures, which is an arguably onerous job required by prior NAS methods. Admittedly, we still perform search within the search space implicitly induced by our segment database for Computation Graph mutation (a point we fully agree with the reviewer on). Thank you for pointing it out!
Reviewer: "The limiting factor is not the existence of a predefined search space on its own but its size - the proposed method has the advantage of being able to span more networks designs easily (e.g., due to the automated mining of segments) but fundamentally it also utilises a predefined search space, even if defined implicitly."
We definitely agree with the reviewer: our algorithm-mined segment database covers diverse types and sizes of convolutional ops and structures. AutoGO is the first automated solution for editing CNNs over this induced CG search space, which provides more flexibility than any single manual design. We will further polish the introduction, related work, and conclusion to make sure this point is clarified.
Our work offers a novel, data-science-based angle on search space design. We show that by mining a sufficiently diverse set of NAS benchmarks (5 here), it is feasible to apply a data science approach to CG mutation space construction, replacing human-defined block construction or expert rules in NAS. We are trying to promote an alternative idea for future NAS: mining and maintaining a shared subgraph repository to evolve low-level CGs efficiently for AI deployment, instead of redefining search spaces for each task and device, which involves trial and error and limits the application of NAS to AI deployment, especially given the rise of edge devices running AI (as our real-world FSRCNN/denoising U-Net results target). We included these results in Sec. 4.5 and do not deem them a weakness, since these proprietary networks show real-world use cases for AutoGO in IoT in addition to our other results. And we believe our extracted segment database, which we commit to open-sourcing and maintaining constantly, is relevant and valuable to the research community for future studies on automated DNN acceleration and deployment in practice.
---
Reply to Comment 2.1.2:
Title: Addressing Some Specific Concerns
Comment: Responses to specific questions/concerns:
### Q1/3 Overall cost and amount of labelled data for predictor training:
We apologize that the answers to these questions were not properly posted in the first rebuttal, although we tried to include them. The predictor training was not costly. To train/test the PSC predictor used throughout the paper, we use 21k labelled architecture CGs (NB-101: 5k randomly selected CGs, ~1.2% of the entire NB-101; NB-201: 4096 archs, or 26.2%; HiAML: 4.6k; Inception: 580; Two-Path: 6.9k). These settings and numbers are listed in Table 7 in the paper. Each CG can be decomposed into many P-S-C data samples taken from the original CGs; e.g., in Table 7 the 5k NB-101 samples become over 400k P-S-C samples, but there are still at most 5k unique accuracy labels. Then, we only need 17k (80%) of the labelled architecture CGs to train the PSC predictor used in generating the results for Tables 2-6. Another 2k (10%) form the validation set, from which the predictor does not learn, while we use the last 2k (10%) to evaluate the predictor in terms of SRCC in Table 1. The same predictor, trained as above, is used to generate all other results throughout the paper (including Tables 2-6).
The predictor training process took around a day on our hardware (described in Supp. Section A.9) and we note that much of this time is due to I/O loading/bandwidth (which a dedicated software engineer could optimize) as the PSC samples are stored in large caches. Overall, the predictor training on 80% of 21k labelled architectures was not costly.
### Q2 Operations used:
The bottom right histogram in Figure 5 provides overall statistics, which group all operations by primitive name; e.g., “conv” represents all convolution variants together, regardless of kernel size, which is in {1, 3, 5}. The exception is depthwise convs, which have kernel sizes in {3, 5}. We can also provide more detailed breakdowns of the statistics if needed.
In this paper, we focus on implementing a working AutoGO framework for CNNs on CV tasks. Our segment database is extracted from sufficiently diverse NAS benchmarks, which cover a wide range of the primitive operations that compose CNN architectures (not Transformers). However, we will refine the statement to say that we mutate “using operations extracted from NAS benchmarks” rather than “using its original primitive operations,” as suggested by the reviewer. In AutoGO, it is true that each mutation must use ops that already appear in the segment database. Thanks for pointing it out. The reason we said mutating “using its original primitive operations” was that here we focus only on CNN primitive ops, which our extracted segment database covers as single-node segments (i.e., primitive ops). But we will certainly refine these statements, taking the reviewer's suggestion into account.
### Q4 BPE limitations:
In the rebuttal reply to "W3" we mention that the choice of toposort+BPE is explicitly a trade-off between completeness and efficiency, and state that while BPE is much faster (compared to WL-kernel subgraph mining, which is actually infeasible to execute on the CGs of all 5 benchmarks on which we performed segment extraction), its limitation is that it cannot handle isomorphisms. But this is not a big issue, since we can remove redundant segments, i.e., isomorphic subgraphs that receive different sequential encodings under toposort+BPE, in order to retain only unique subgraphs (segments), as mentioned in line 259. This filtering was fast, since we only needed to compare subgraphs with the same number of nodes to check for isomorphism.
In the BPE algorithm, the vocabulary size critically determines how many subgraphs we can extract and the associated cost. In this paper, we strike a balance by setting the vocabulary size to 2000, which yields subgraphs of up to 16 nodes and edges. The segment database constructed this way, together with the PSC predictor trained under this segment database (vocabulary), is sufficient to yield benefits on the range of networks and tasks we optimized using AutoGO. However, it is worth noting that the BPE tokenization algorithm works recursively, extracting all 1-node segments, followed by all 2-node segments, then all 3-node segments, and so on. This means our segment database can grow incrementally, which is another advantage: when the current database is not sufficient, we can always include more complex subgraphs with more nodes/edges by running BPE further.
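The recursive pair-merging idea described above can be made concrete with a minimal sketch (an editorial illustration, not the authors' implementation): BPE-style merging over hypothetical topologically sorted operation sequences, where each merge adds one longer segment to the vocabulary.

```python
from collections import Counter

def bpe_merge(sequences, num_merges):
    """Iteratively merge the most frequent adjacent token pair.

    Each merge adds one entry to the vocabulary, mirroring how a segment
    database can grow incrementally (1-node segments, then 2-node, ...).
    """
    seqs = [list(s) for s in sequences]
    vocab = sorted({tok for s in seqs for tok in s})  # all 1-node segments
    for _ in range(num_merges):
        pairs = Counter()
        for s in seqs:
            pairs.update(zip(s, s[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merged = a + "+" + b
        vocab.append(merged)
        for s in seqs:  # greedy, non-overlapping replacement of the pair
            i = 0
            while i < len(s) - 1:
                if s[i] == a and s[i + 1] == b:
                    s[i:i + 2] = [merged]
                i += 1
    return vocab

# Toposorted op sequences from three toy CNN computation graphs (made up here).
graphs = [
    ["conv", "relu", "conv", "relu", "pool"],
    ["conv", "relu", "pool", "conv", "relu"],
    ["conv", "conv", "relu", "pool"],
]
vocab = bpe_merge(graphs, num_merges=2)
# The first merge captures the frequent "conv+relu" segment, the second
# extends it to "conv+relu+pool" -- growing the database incrementally.
```

Because toposort flattens the graph to a sequence, two isomorphic subgraphs can receive different encodings here, which is exactly the limitation (and post-hoc deduplication fix) discussed above.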
We sincerely appreciate the reviewer's time and effort in reading our responses again and reconsidering the evaluation. We hope we have addressed the reviewer's main concerns with these responses. We are committed to further refining our work and making the necessary improvements to address these and any further concerns you may have. Thank you for the opportunity to enhance the quality of our research. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their constructive comments, for the suggested references, and for pointing out several typos, which we will fix. In addition to the individual responses, we are providing a PDF containing some new figures/tables.
We would like to clarify that the motivation of AutoGO is to provide an automated framework that helps ML practitioners optimize large networks for deployment (with faster/lighter inference) on their target hardware (usually an edge device like a cell phone or camera rather than a GPU), such as FSRCNN for Super Resolution, or a U-Net for Image Denoising optimized for PSNR and latency/power on a mobile NPU. Most existing NAS work is devoted to optimizing architectures by proposing a search algorithm that explores architectures, or encodings of architectures, in a manually defined search space, either pushing accuracy to the limit or performing multi-objective search. However, none of the existing works suits the deployment and inference-acceleration scenario above, for several reasons:
1) When deploying an ML task in practice, ML practitioners seldom resort to NAS, because manually redesigning a search space is the hardest part. This is especially challenging when one has multiple devices, or multiple generations of devices, and wants to quickly migrate models across them. Rather than redesigning everything, including the search space, from scratch, a practitioner will take a well-known architecture reported in the literature for that task and start editing it to fit the hardware while maintaining accuracy as much as possible. That is exactly the process AutoGO tries to automate, rather than searching for another large network that beats SOTA on a CV task.
2) Given the large body of literature on NAS, the search algorithms (EA, RL, BO), multi-objective search, and architecture encodings are already heavily studied: they are not the bottleneck. The bottleneck is how to come up with the search space for the task. Note that existing scientific research on NAS usually hand-designs a search space that boosts scores and remains GPU-friendly, but is not necessarily friendly to everyday devices. AutoGO proposes a novel data science approach to this dilemma: mining segments from a diverse range of benchmarks featuring different kinds of topological features and ops, and maintaining a segment vocabulary. By operating directly at the CG level (a fundamental representation of any network), AutoGO can edit any given input CNN's CG using this vocabulary (without distorting the network representation), thus avoiding confining the search to specific, manually designed cell/block structures, macro backbones, or any other assumption. The segment vocabulary contains fine-to-coarse subgraphs and can be constantly updated to supply AutoGO.
3) We optimize over a range of NAS benchmarks to show that AutoGO can still enhance the best architectures in them, while exhaustive search cannot, because the search space design is insufficient. We show AutoGO can generalize to enhance high-resolution networks (ResNet, EDSR, VGG, etc.) in both accuracy and inference speed, even on GPU. Finally, we demonstrate that the already very lightweight, manually designed proprietary FSRCNN and U-Net can still be optimized by AutoGO to achieve low-power/low-latency inference on a cellphone. These results are all challenging to achieve.
Also, we clarify some writing in the paper:
### L91: "First, benchmarks only provide performance annotations for architectures inside a manually designed fixed search space." relevance to this work? (MAXz)
If we change an architecture in NB101 by tweaking ops or channels in a cell, the result is an architecture outside the predefined NB101 search space, whose performance cannot be assessed by a predictor trained on original NB101 encodings. In contrast, our predictor can assess it, as it operates in the lower-level CG space.
### What operations are used (MAXz)
We mutate an architecture using BPE-extracted segments, each composed of primitive operations like conv or relu. Segments range from all primitive operations (single nodes) to any 2-node sequences, ..., up to the largest segments in our database (Supp. Section A.1) with 15 nodes.
### Line 225 “Retrain” Typo (MAXz)
This is a typo and is supposed to be “retain”. Mutation is guided by the PSC predictor, without requiring any retraining.
### Our FSRCNN has 150x more FLOPS? (MAXz)
This is not the case. Our proprietary FSRCNN-3,4 models in Tab. 5 are not comparable to, e.g., the FSRCNN-7 model in the ECBSR paper. First, ours have 3 or 4 convolutional layers, while FSRCNN-7 has 7 layers (6 GFLOPs per Table 1 of the ECBSR paper) and uses a 9x9 Deconv for upsampling, whereas ours uses a 2x2 Deconv to be friendly to cellphone hardware. Moreover, the FLOPs reported in Table 5 used an input image size of 64x640, which is what our power profiler allows. Also, there is a typo in Table 5: FLOPs should be in units of 1e6, not 1e9. To clarify all of this, we provide a new Table 10 in the attached PDF with power metrics and FLOPs for input size 640x360 (as used in the ECBSR paper). At this size, our FSRCNN-3,4 base models have 2.67/3.74 GFLOPs, significantly less than the 6 GFLOPs of FSRCNN-7 in the ECBSR paper. Yet the point of Table 5 is that AutoGO can still optimize such small models.
### FSRCNN, U-Net experiments use proprietary networks (MAXz)
AutoGO aims to assist ML engineers in auto-tuning a known architecture on their target hardware (e.g., a cellphone) for faster inference. Our final results in 4.5 on FSRCNN and U-Net demonstrate this ability of AutoGO in practice, complementing our other results in 4.1-4.4 on public datasets/networks. We could only post relative changes at submission time, but have since been cleared to post power/latency metrics. The power values for the Table 6 U-Net are 724.59mW and 657.82mW for the original and AutoGO-optimized models, respectively.
Pdf: /pdf/d092911bebc032581531a9fc4a70fd00a243e497.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
An information-theoretic quantification of the content of communication between brain regions | Accept (poster) | Summary: In the present work, building on recent advances in Partial Information Decomposition or PID, the authors develop a novel information theoretic measure which they term Feature-specific Information Transfer (FIT) and that the authors claim can capture the feature-specific information transfer between brain regions. The authors validate FIT using synthetic and real world neural recordings.
Strengths: - Originality: The authors introduce a new measure that even though is based on recent advances on PID is still a significant novel contribution.
- Quality: the theoretical derivations are solid and there are no obvious flaws that I can see. The introduction of the novel FIT measure is backed by sufficient experimental evidence both with simulated and real world data.
- Clarity: The paper is very clearly written and the illustrations are very good.
- Significance: I think the paper introduces an important measure that can help study stimulus-driven communication between brain areas.
Weaknesses: I think that stating that the FIT measure is feature-specific can be a bit misleading. I would probably have called this measure stimulus-specific rather than feature-specific. I personally reserve the term "feature" to refer to internal neural representations that may be triggered by external stimuli. By that definition, features cannot be observed and it is very hard to attribute certain neural activity to a specific meaningful feature.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What is the difference between feature and stimulus in this work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been properly addressed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We agree that the nomenclature of the new metric is important and should be chosen with care. We are thus grateful to the Reviewer for raising the issue of how best to name it.
In the present study, we consider as a “feature” any variable of interest that is external to the considered neural network. This can be a feature defining an external sensory stimulus (color, contrast, etc.) or a feature defining a behavioral output (the choice made by the subject, or a motor or kinematic variable of the subject's behavior). While some people reserve “feature” for a specific quantification of neural activity (e.g. the timing of spikes, the spike rate, etc.), many people, especially in vision or hearing, reserve the word for features of a sensory stimulus. We will better define in the paper's revision what we mean by feature. We also agree that in the current version of the paper we used the words ‘stimulus’ and ‘feature’ somewhat interchangeably, leading to confusion. We will revise the text to avoid any ambiguity between ‘stimulus’ and ‘feature’. For example, we could replace the letter ‘S’ that we currently use to refer to the external feature in the maths with an ‘F’, and replace sentences such as ‘shuffling X across trials at fixed stimulus’ with more precise wording such as ‘shuffling X across trials with the same value of the feature’.
However, we would also like to consider and discuss alternatives. We like the suggestion of this referee to call the measure “Stimulus information transmission” or “Stimulus-specific information transmission”. However, we define the measure such that it can be applied to information contents more general than sensory stimuli (as explained above). In our paper, we applied FIT to compute the transmission of information about behavioral choice (Fig 3) as well as about sensory stimuli. Using the terminology “stimulus-specific” could thus incorrectly suggest that the applicability is limited to sensory function.
Another possibility would be to use “content-specific” rather than “feature’-specific” and call the measure “Content-specific information transfer – CSIT”.
We would prefer to use the current terminology with better clarifications, as explained above. However, we are open to change if it is felt by the reviewers that a change would enhance the clarity about what the measure quantifies. We would warmly appreciate feedback from all reviewers about this issue.
We also hope that the Reviewer will appreciate the advances we presented in this rebuttal in response to the other Reviewer’s suggestions (simulation studies of source mixing, simulation studies of simultaneous encoding of multiple features, computation of DFI on real data, simulation studies scaling of FIT with data size and dimensionality, etc).
---
Rebuttal 2:
Comment: Dear Reviewer, Thanks again for your insights and suggestions, which we greatly appreciated. We wonder whether you received our rebuttal containing the clarifications you requested and the response to your suggestions for improving clarity, as well as other clarifications and extra analyses in response to suggestions of other reviewers. The week in which we can interact is getting to a close and we would really appreciate the opportunity to receive your feedback on our work. Thanks so much!
---
Rebuttal Comment 2.1:
Comment: I think that adding clarifications in the text would suffice to address my concern regarding the nomenclature used in the paper.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer, thanks again for your valuable insights, which we appreciate and which help us improving the paper. With regard to improving nomenclature, we will add the clarifications about what we mean by feature, as described in our rebuttal. | Summary: Exploring the content and direction of communication between brain regions is key to understanding the brain. This paper proposes a method called Feature-specific Information Transfer (FIT) to investigate the feature-specific content and direction of information flow in multi-region brain recordings. To isolate feature-specific information, the authors use the Partial Information Decomposition (PID) concepts of redundancy and uniqueness of information and the Wiener-Granger causality principle to find the feature-related flow of information. The authors evaluate the ability of FIT to measure feature-specific information flow using synthetic datasets. They show that FIT performs well in both cases: (1) brain regions encode stimulus with communication; (2) brain regions encode stimulus independently without actual communication occurring between them. Finally, the authors compare FIT with Directed Feature Information (DFI) and evaluate the performance of FIT by three neural recordings spanning the range of electrophysiological recordings (spiking activity, MEG, and EEG).
Strengths: * The problem formulation is clear, giving a clear definition and motivation of the problem the authors want to solve.
* The authors' exposition of their method and experiments is straightforward and comprehensive.
* The proposed method, FIT, can understand how the brain works with complex brain functions and the flow of different types of information. FIT would provide an essential set of contributions to the field.
Weaknesses: * The authors should highlight and summarize the contributions of this work at the end of Section 1.
* The comparison of FIT and Directed Feature Information (DFI) is only performed on simulated datasets.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * What will happen if multiple feature-related information is transferred from sender $X$ to receiver $Y$ simultaneously? One may come from stimulus, but the other may come from hidden sources, e.g., other brain regions. Could FIT distinguish different types of information flows that occur simultaneously?
* If possible, could you please discuss the stability of FIT over the increasing dimensionality of data? One major limitation of applying PID to neuroscience is the computational difficulty of estimating PID for high-dimensional neural data.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I have highlighted technical limitations and weaknesses above. I have nothing further to add here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Compute DFI on real data.
We computed DFI on the 3 real-data sets (Fig. R3).
In the MEG dataset (Fig. R3B), DFI took negative values and was thus not interpretable as a measure of information transfer. Unlike FIT, DFI could not detect that (as predicted by previous studies) stimulus information is stronger in the feedforward than in the feedback direction, and DFI could not detect that feedforward stimulus information is stronger in correct than in error trials (an important result found by FIT).
In the EEG dataset (Fig. R3A), DFI was negative and thus not interpretable as a measure of information transfer. Comparing the DFI results across eye-visibility features and across directions of cross-hemispheric transfer could not support the conclusion (predicted by findings in previous literature and confirmed by the FIT analysis) that across-hemisphere information transfer is directional from contra- to ipsilateral (DFI did not detect a leading direction of RE information transfer) and is feature specific (DFI did not detect a difference between LE and RE information in the R-to-L hemisphere communication).
In the thalamocortical spiking data (Fig. R3C), DFI had mostly positive values, which are thus interpretable in terms of transmitted information. DFI confirms (though with lower statistical power) the FIT results that in both the somatosensory and visual thalamocortical pathways more information is transmitted feedforward about the corresponding sensory modality (more visual than somatosensory information transmitted from visual thalamus to visual cortex, and more somatosensory than visual information transmitted from somatosensory thalamus to somatosensory cortex). However, DFI fails to demonstrate that, as expected from well-established neurophysiological findings, more information about such simple stimulus features is transmitted from thalamus to cortex than from cortex to thalamus.
We will add these results to SM4.2 where we report the properties of DFI.
In sum, we recognize that the idea behind DFI is sound. However, the imperfect definition of redundancy used in DFI (conflating synergistic and redundant effects) leads to problems which we have already understood mathematically and characterized by simulation in the previous version of our paper. These new analyses of real data, which we will add to the revision, show that the problems predicted by theory and confirmed by simulation are also found in real data, suggesting that DFI is not robust enough to be applied to brain data, and that the advances provided by FIT are important not only conceptually but also for the analysis of empirical datasets.
Study case of information about multiple features transferred simultaneously.
In the submitted paper we already showed that FIT can reveal encoding of multiple features. Simultaneous encoding of multiple features was indeed investigated in the analyses of all three datasets (MEG, EEG, spikes). In MEG, we found simultaneous encoding and transmission of stimulus and choice information. In EEG, we found simultaneous encoding and transmission of left-eye and right-eye information. We used the relative information values to identify the features most strongly encoded and transmitted (visual contrast in the MEG dataset, the left eye in the EEG dataset).
We now performed, as suggested, a simulated study of encoding and transmission of multiple features (Fig R4B). We simulated two independent features (e.g. of a sensory stimulus) S1,S2 simultaneously encoded and transmitted (S1 more strongly than S2):
X = S1 + D*S2 + Ex, where S1, S2 are independent binary variables (±1), Ex is Gaussian noise with SD = 1, and Y equals X with a time lag plus independent Gaussian noise with SD = 1.
We found (Fig R4B) that FIT identifies correctly that both features are transmitted, and ranks correctly the features about which most information is transmitted. These simulations will be added to the SM of the revised paper.
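The simulation described above can be sketched in a few lines (an editorial reconstruction, not the authors' code: D = 0.5 is an assumed value, the time lag is collapsed to a per-trial copy since the features are independent across trials, and a simple plug-in mutual information stands in for the full FIT computation to illustrate the feature ranking).

```python
import numpy as np

rng = np.random.default_rng(0)
n, D = 200_000, 0.5   # D < 1: S2 is encoded more weakly than S1 (assumed value)

S1 = rng.choice([-1, 1], n)
S2 = rng.choice([-1, 1], n)
X = S1 + D * S2 + rng.normal(0, 1, n)   # sender encodes both features
Y = X + rng.normal(0, 1, n)             # receiver: copy of X plus independent noise

def mutual_info(feature, signal, bins=20):
    """Plug-in MI estimate (bits) between a binary (+/-1) feature and a binned signal."""
    edges = np.histogram_bin_edges(signal, bins)
    joint, _, _ = np.histogram2d(feature, signal, bins=[np.array([-2.0, 0.0, 2.0]), edges])
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

mi_s1, mi_s2 = mutual_info(S1, Y), mutual_info(S2, Y)
# Both features reach the receiver, and the dominant feature S1
# carries clearly more information than S2 -- the ranking FIT recovers.
```

With a genuinely lagged Y and a full PID the same ranking should hold; this sketch only illustrates why.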
Scaling of FIT with data size and dimensionality.
We added simulations that study how FIT scales with the number of trials available in the dataset and with the dimensionality of neural activity (here, the number of bins of the discretized activity). We found (Fig R4A) that FIT behaves much better than Shannon information quantities (e.g. TE) with respect to data size and dimensionality. The correct value of FIT, computed from large data, is achieved already with a smaller number of trials than for the Shannon information quantities. We found that accurate calculations of FIT are possible with the number of trials available in empirical datasets (Fig R4A; for comparison, FIT calculations in the paper were done with R=2-4). Our understanding is that the better scaling and sampling properties of FIT w.r.t. Shannon information quantities arise because FIT considers a PID part of the total information that has less bias than other parts of the total information. Given that the PID atoms of FIT do not contain synergistic terms, this is in line with previous work (Montemurro et al, Neural Comput 2007) showing that synergistic components of information have much larger limited-sampling bias, and that information quantities that do not include synergistic components have much better sampling properties than full multivariate Shannon information quantities. Thus, FIT can be computed from the datasets to which Shannon information measures are typically applied in neural data analysis.
For completeness, we also added to our code the implementation of information theoretic limited sampling bias corrections that slightly aid (Fig R4A) the calculation of information theoretic quantities from real data by subtracting an estimate of the bias computed with a quadratic extrapolation of the scaling of information when subsampling available data (Strong et al, Phys Rev Lett 1998; Panzeri et al J Neurophysiol 2007).
Writing.
We will highlight and summarize more sharply the contributions of this work.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thanks again for your insights and suggestions, which we greatly appreciated.
We wonder whether you received our rebuttal containing the results of the analysis of DFI on real data (confirming the problems with DFI already highlighted in the submitted paper using mathematical considerations and simulation), and the simulations of FIT with multiple simultaneously transmitted features (complementing the results about the transmission in the same brain regions of information about different features already presented in the submitted paper with real neural MEG/EEG/spike data), as well as other clarifications and extra analyses in response to suggestions of other reviewers. The week in which we can interact is getting to a close and we would really appreciate the opportunity to receive your feedback on our work. Thanks so much!
---
Rebuttal Comment 2.1:
Comment: Thank you for the responses and explanation. The paper has improved as a result of authors efforts to address raised questions and concerns. I have upgraded my score to 7.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer, thanks again for your valuable insights, which helped us to improve the paper significantly. Thanks also for raising the score to 7 on the basis of the extra work we performed. We are grateful for your support. | Summary: This paper proposes a novel non-parametric method aimed to quantify the amount of brain communication between (time series representing the activity of) brain regions using information theoretic measures. Concretely, the authors’ method builds on the framework of partial information decomposition of the transfer entropy (TE), which is the mutual information between the past of a presumed sending signal X and the present of a presumed receiving signal Y conditioned on its own past. This can be interpreted as the amount of predictive information the past of X contains about the present of Y that goes beyond the information contained in the past of Y itself. TE can therefore be interpreted as an operationalization of the Wiener-Granger causality principle for general non-linear/non-Gaussian data. Based on quantities defined in this framework, the authors now go one step further to quantify only that part of the directed information flow from X to Y (analogous to TE) that contains information about an external content variable S, which could for example be a stimulus in a cognitive neuroscience experiment. Another extension, conditioning the measure on a fourth, potentially confounding variable, is also presented. The authors provide extensive theoretical derivations of their metrics in a long supplement and prove several of their properties. Moreover, the method is demonstrated in a set of simulations as well as on two real electrophysiological neuroimaging experiments.
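The transfer entropy this summary describes, TE(X→Y) = I(Y_t ; X_past | Y_past), can be illustrated with a small plug-in estimator (an editorial sketch, not code from the paper, assuming discrete time series and a history length of one):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (bits) of TE(X -> Y) = I(Y_t ; X_{t-1} | Y_{t-1})
    for discrete time series, with history length 1."""
    triples = list(zip(y[1:].tolist(), x[:-1].tolist(), y[:-1].tolist()))
    n = len(triples)
    p_abc = Counter(triples)                       # (Y_t, X_{t-1}, Y_{t-1})
    p_ac = Counter((a, c) for a, b, c in triples)  # (Y_t, Y_{t-1})
    p_bc = Counter((b, c) for a, b, c in triples)  # (X_{t-1}, Y_{t-1})
    p_c = Counter(c for a, b, c in triples)        # (Y_{t-1},)
    te = 0.0
    for (a, b, c), k in p_abc.items():
        te += (k / n) * np.log2(k * p_c[c] / (p_ac[(a, c)] * p_bc[(b, c)]))
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 100_000)
y = np.roll(x, 1)                  # Y copies X with a one-step lag: ~1 bit flows
y[0] = 0
z = rng.integers(0, 2, 100_000)    # independent series: TE near zero
te_coupled, te_indep = transfer_entropy(x, y), transfer_entropy(x, z)
```

FIT then decomposes such a directed flow further, keeping only the part that carries information about the external variable S.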
Strengths: The paper contains an enormous amount of work, as signified by a 34-page supplement. The mathematical derivations appear to be solid and comprehensive. As far as I can tell, the proposed methods are mathematically sound, and the mathematical results provide a good foundation for understanding the properties of the methods. The paper and supplement are well written, and the paper is well organized. The density of the information contained in the paper is very high, and the reader is often referred to the supplement for key information, which is something to reconsider. The paper contains somewhat comprehensive simulations and two applications to real data. The figures are of very high quality.
Weaknesses: Presentation-wise, I do not consider it helpful to put all technical details in the supplement. Quantities like shared information and FIT itself are not defined in the main body, which makes it difficult to follow the theoretical arguments. Other technical details seem to be missing completely, or at least I could not find them. This concerns the calculation of the various information-theoretic metrics themselves.
My main objection is that the proposed metrics are affected if the measured data are mixtures of underlying sources. This is known to be the case for electrophysiological recordings like MEG and EEG. Mixing occurs due to the propagation and superposition of electrical currents and magnetic fields through the head from the brain sources to the sensors. It is most highly pronounced when analyzing data on the M/EEG sensor level (as apparently done in the section on “Eye-specific interhemispheric information flow during face detection”). Invasive electrophysiological recordings can also exhibit strong artifacts of source mixing, if different channels are recorded relative to a common electrical reference. This can lead to spurious estimates of information transfer for measures based on the concept of Granger causality (GC). A simple example to illustrate this is a single brain source whose activity is measured in two channels due to source mixing. That alone would not cause Granger causality. However, if both recorded channels are affected by (to some extent) independent noise, (spurious) GC emerges, the reason being that the past of both channels together contains more information about the present than the past of any single channel alone, since noise can be better averaged out by combining channels. Note that this behavior is neither overcome by GC's property of modeling time-delayed interactions nor by conditioning on the past of the receiver.
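The spurious-causality mechanism described in this paragraph can be reproduced in a few lines (a hypothetical sketch, not any author's pipeline; `granger_gain` and all numerical parameters are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def granger_gain(x, y):
    """Lag-1 linear Granger statistic: log ratio of residual variances
    when predicting y[t] from y[t-1] alone vs. (y[t-1], x[t-1])."""
    tgt = y[1:]
    full = np.column_stack([y[:-1], x[:-1], np.ones(len(tgt))])
    restr = np.column_stack([y[:-1], np.ones(len(tgt))])
    r_full = tgt - full @ np.linalg.lstsq(full, tgt, rcond=None)[0]
    r_restr = tgt - restr @ np.linalg.lstsq(restr, tgt, rcond=None)[0]
    return float(np.log(r_restr.var() / r_full.var()))

# Null scenario from the review: ONE latent AR(1) source measured in two
# channels with independent noise -- no true interaction between channels.
n = 20000
z = np.zeros(n)
for t in range(1, n):
    z[t] = 0.9 * z[t - 1] + rng.standard_normal()
x = z + rng.standard_normal(n)          # X = Z + E1
y = z + 0.8 * rng.standard_normal(n)    # Y = Z + C*E2, with C = 0.8

gc_mixed = granger_gain(x, y)                     # clearly positive: spurious GC
gc_control = granger_gain(rng.permutation(x), y)  # near zero when X, Y unrelated
print(gc_mixed, gc_control)
```

Combining the past of both channels averages out the independent noise, so the mixed channel's past "predicts" the other beyond its own past, exactly as the review argues.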
In the presence of source mixing, GC is mainly driven by asymmetries in the mixing proportions of different signals and not so much by their interactions. These mixing proportions depend on factors such as the choice of the reference electrode.
The authors are aware of the issue as they mention: “because they were sufficiently far apart to avoid leakage in source reconstruction [57].”. However, it is hard to validate this claim, and I consider it rather unlikely that different sources in the visual system are sufficiently far apart (if that is even possible). Such an assessment would also depend on certain parameter choices of the inverse model, which are currently not provided.
I strongly urge the authors to provide some theoretical and empirical evidence for the behavior of their methods in mixed-signal settings. Specifically, I suggest studying the null case of only one source with independent noise: X = Z + E1 ; Y = Z + C*E2, with coefficient parameter C and independent noises E1 and E2, as well as the case of two interacting signals Z1 and Z2 with FIT(Z1 -> Z2) > 0, where instead of Z1/Z2 the mixed signals X = Z1 + A*Z2 and Y = Z2 + B*Z1 are observed. I suggest studying this case as a function of the parameters A and B, with |A| < 1 and |B| < 1.
Language-wise, I found it slightly odd to put some parts of the methods description into the past tense (lines 80ff, for example). Also, some sentences seem to be rather long, see for example line 335ff.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How were the information theoretic quantities calculated? Was this done by constructing histograms? What were the dimensions of the different distributions? How many bins were used and can the authors comment whether the amount of available data was sufficient to estimate these quantities? Or were some kinds of parametric distributions fitted and the information theoretic measures were evaluated analytically?
- What were the parameters of the source reconstruction in the MEG/EEG examples? Which inverse methods were used, with what regularization parameters? How were the forward models calculated? Was source reconstruction done with free-orientation dipolar sources or with orientation fixed perpendicular to the cortex? How many source dipoles were modeled? How were sources aggregated into regional time series? How many time series per region? What was the dimensionality of the data in each step of the processing chain?
- Line 318: the study [4] seems to be relevant but is almost not discussed. Why is it so important that the metrics “should be upper bounded by either feature information encoded in the past of the sending region or the total information flowing between regions.” What is the practical disadvantage if this is not the case?
- What is the relevance of the theoretical properties of FIT shown in the supplement ( in sections SM1.3.4ff)? What would be the practical implications if some of these properties would not hold?
- P195ff: What are the references for these statements (e.g. “area V3A (carrying maximal stimulus information in the dorsal stream visual cortex)”)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Despite the comprehensive theoretical content the paper is lacking some technical details which should be provided in a revision.
The main limitation for me is the unclear behavior of the proposed methods in the presence of mixed signals. While the paper is overall of high quality, I see a necessity to reduce my score if the issue is not adequately addressed/discussed in a revision.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Asymmetric source mixing.
We simulated source mixing in different proportions in X and Y due to field spread/ common reference. Such source mixing in real cases is (and is assumed to be in our simulations) instantaneous (i.e. zero lag) and with a stable (across time) proportion of source sharing in X,Y.
We simulated a source Z (informative about a stimulus feature S) shared between X and Y with a different proportion A:
X= Z(s) + Ex ; Y=A*Z(s)+ Ey
with Ex, Ey indep. Gauss. noise. This is the “null model” proposed by the Referee, but we controlled the SNR of X,Y by changing A (which sets the relative level of stimulus signal in X,Y) and fixing noise SD to 1. As predicted, on this model FIT and TE had spurious positive values (Fig R1B). In the submitted paper, we used a null-hypothesis test (randomly permuting trials with the same stimulus feature value) to detect spurious values induced by X,Y covariations due to stimulus-signal sharing. Here, this test correctly identifies that the null model's FIT and TE values are generated only by source sharing with no real transmission (Fig R1B). We already reported in the submitted paper that the real-data FIT values were statistically significant under this test.
Importantly, analysis of the null model also shows that with instantaneous source mixing the ratio between stimulus info in X and Y is constant in time (Fig R1A). This gives a useful heuristic: different timecourses of stimulus info in X vs Y cannot be explained by instantaneous source mixing. We measured all real-data FIT in cases with a delay in stimulus info latencies between X and Y (X-to-Y info latencies: MEG, 17-35ms between V1 and higher areas, Fig R2C; EEG, 25ms across hemispheres, Fig 4B; spike data, 20ms from thalamus to cortex, Fig S8). Overall, these findings speak against dominant mixing of a stimulus-informative source in our analyses.
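The within-stimulus permutation surrogate described above can be sketched as follows (our own illustration; the actual test is applied to FIT/TE, which we do not reimplement, and all names and parameters here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def within_stimulus_permutation(Y, stim, rng):
    """Randomly re-pair trials of Y WITHIN each stimulus class: stimulus-driven
    structure (class means) is preserved exactly, while trial-by-trial
    shared-noise correlations with any other signal are destroyed."""
    Y_perm = np.empty_like(Y)
    for s in np.unique(stim):
        idx = np.flatnonzero(stim == s)
        Y_perm[idx] = Y[rng.permutation(idx)]
    return Y_perm

def demean_by_class(v, stim):
    """Remove per-stimulus means, leaving only trial-to-trial noise."""
    out = v.astype(float).copy()
    for s in np.unique(stim):
        out[stim == s] -= v[stim == s].mean()
    return out

# Null model from above: one stimulus-informative source Z(s) shared
# between X and Y with proportion A, plus independent noise.
n_trials, A = 4000, 0.6
stim = rng.integers(0, 2, n_trials)
z = 2.0 * stim + rng.standard_normal(n_trials)   # Z(s): signal + shared fluctuation
x = z + rng.standard_normal(n_trials)            # X = Z(s) + Ex
y = A * z + rng.standard_normal(n_trials)        # Y = A*Z(s) + Ey
y_perm = within_stimulus_permutation(y, stim, rng)

r_orig = np.corrcoef(demean_by_class(x, stim), demean_by_class(y, stim))[0, 1]
r_perm = np.corrcoef(demean_by_class(x, stim), demean_by_class(y_perm, stim))[0, 1]
print(r_orig, r_perm)  # shared-source correlation large before, near zero after
```

Because each trial of Y is swapped only with trials of the same stimulus value, stimulus information in the surrogates is untouched; only the mixing-induced trial-wise covariation is broken.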
Finally, as suggested we simulated the case with real FIT between two “pure” signals Z1 and Z2 that are unevenly mixed in the measured X,Y:
X = Z1 + AZ2 + Ex; Y = Z2 + BZ1 + Ey
Since adding a new stimulus-informative channel (Z1 to Y and Z2 to X) increases the stimulus information in X,Y, we set the SD of the indep. Gauss. noise Ex,Ey to equalize the SNR of X and Y across the simulated parameter space. We found (Fig R1C) that mixing (A,B>0) reduced FIT and TE compared to the pure case (A=B=0). However, the correct directionality of info transfer was always detected for all mixtures. Thus, FIT is reasonably conservative and robust to this mixing.
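The cross-mixing scenario can be sketched in a few lines (again a hypothetical illustration, using a simple linear lag-1 Granger statistic as a stand-in for TE/FIT; all parameters and the coupling strengths are our own choices, not those of the actual simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def granger_gain(x, y):
    """Lag-1 linear Granger statistic (stand-in for TE in this sketch)."""
    tgt = y[1:]
    full = np.column_stack([y[:-1], x[:-1], np.ones(len(tgt))])
    restr = np.column_stack([y[:-1], np.ones(len(tgt))])
    r_full = tgt - full @ np.linalg.lstsq(full, tgt, rcond=None)[0]
    r_restr = tgt - restr @ np.linalg.lstsq(restr, tgt, rcond=None)[0]
    return float(np.log(r_restr.var() / r_full.var()))

# Two "pure" sources with genuine z1 -> z2 coupling at lag 1.
n, A, B = 50000, 0.3, 0.3
z1, z2 = np.zeros(n), np.zeros(n)
for t in range(1, n):
    z1[t] = 0.7 * z1[t - 1] + rng.standard_normal()
    z2[t] = 0.5 * z2[t - 1] + 0.8 * z1[t - 1] + rng.standard_normal()

# Measured channels are uneven mixtures plus independent sensor noise.
x = z1 + A * z2 + rng.standard_normal(n)   # X = Z1 + A*Z2 + Ex
y = z2 + B * z1 + rng.standard_normal(n)   # Y = Z2 + B*Z1 + Ey

gc_fwd = granger_gain(x, y)   # direction of the true coupling
gc_bwd = granger_gain(y, x)
print(gc_fwd, gc_bwd)         # forward gain dominates despite the mixing
```

Consistent with the result reported above, mixing attenuates the directed statistic but the asymmetry between the forward and backward direction survives for |A|, |B| < 1.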
Info computation details.
We will bring into the main text more theory details about PID and FIT. See General and Ref. LrX8 rebuttal for our plan.
Information was computed by discretizing neural activity into equipopulated bins (3 bins for simulations, see ll 137-8; 2-4 bins for real data, SM p. 21-24). These discretized estimators are widely used in neuroscience and PID was developed mostly for discrete distributions. Apologies if this was unclear. We will add to SM a dedicated Methods section and a Table with the number of bins used for each figure panel.
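The equipopulated-binning step can be sketched as follows (our own minimal Python illustration of the standard discrete plug-in approach; function names and the toy data are ours, not the authors' code):

```python
import numpy as np

def equipop_bins(x, n_bins):
    """Discretize a continuous signal into (approximately) equally
    populated bins using empirical quantiles as bin edges."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def mutual_info(a, b):
    """Plug-in mutual information (bits) between two discrete int arrays."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1.0)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px * py)[nz])).sum())

# Toy check: a binary stimulus shifts the mean of a noisy response;
# 3 equipopulated bins retain most of the stimulus information.
rng = np.random.default_rng(0)
stim = rng.integers(0, 2, 10000)
resp = 2.0 * stim + rng.standard_normal(10000)
binned = equipop_bins(resp, 3)
print(mutual_info(binned, stim))
```

Quantile-based edges guarantee roughly uniform bin occupancy, which keeps the plug-in estimate well conditioned with few trials per stimulus.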
We now show in a new simulation that the number of trials we used in simulations and real data analyses is enough for good estimates (see Fig R4 and rebuttal to Ref. TXME).
Importance that FIT satisfies proven info-theoretic bounds.
That FIT satisfies such bounds is essential to interpret FIT values as transmitted information. If FIT could be larger than the feature information encoded by sender X or receiver Y, or than the total information transmitted (TE X→Y), then FIT could not be interpreted as stimulus feature info transmitted X→Y. If e.g. FIT>I(S;X_past), then some of the stimulus feature info in FIT cannot have been transmitted X→Y. These bounds also allow quantifying FIT in meaningful relative terms. They allow, for example, interpreting the ratio FIT(X→Y)/I(S;X_past) as the proportion of stimulus info encoded in the past of X that gets transmitted to Y. This will be explained in the revised paper.
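In our notation, the bounds discussed here can be restated compactly (a paraphrase for the reader, not a verbatim equation from the paper):

```latex
\mathrm{FIT}(X \to Y) \;\le\;
  \min\bigl\{\, I(S; X_{\mathrm{past}}),\; I(S; Y_{\mathrm{pres}}),\;
  \mathrm{TE}(X \to Y) \,\bigr\},
\qquad
0 \;\le\; \frac{\mathrm{FIT}(X \to Y)}{I(S; X_{\mathrm{past}})} \;\le\; 1 .
```

The min over the three quantities is what licenses reading FIT as a fraction of the sender's feature information that actually reaches the receiver.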
Source mixing in MEG/EEG data.
We used published MEG/EEG data and we kept to the published preprocessing, to avoid introducing changes which confound comparison of FIT with published work. We referred to the original papers for full info, but we provide below (and will add to the SM) some details.
For MEG:
Source reconstruction: LCMV beamformers based on: (i) leadfield matrices from 3-layer boundary element head-model (conductivity 0.3, 0.3, 0.006 S/m for scalp, brain, skull) based on individual MRIs; (ii) covariance matrix (CM) of broadband data (275x275 sensors); (iii) regularization of 5% of CM. Source space constrained to cortical sheet with 4096 vertices per hemisphere, and source orientations chosen to maximize power at each vertex. To illustrate spatial resolution, we plot correlation between LCMV spatial filters of neighboring sources vs distance, finding <0.02 correlations at d>2.5cm (Fig R2B). To compute FIT & TE, time-frequency repr. of sensor data was projected into source space and averaged over vertices within the ROI (80,20,10 vert. for V1, V3A, LO3). Although our FIT/TE analyses were all correct, we identified an error in rendering ROIs in Fig 3B. Apologies. The correct visualization is in Fig R2A. We also apologize for the over-statement about no leakage, which we will rectify. Leakage of MEG source estimates drops greatly at >2cm for realistic SNR (Gross et al, Neuroimage 2003). We thus ran a more conservative analysis only on V1, V3A, LO3, visual ROIs with high stimulus info (Fig R2C) that are >2.8cm apart. Results (stronger feedforward stimulus transmission) are fully confirmed in this network (Fig R2D-F). Since here real-data analyses serve more for validation than for discovery, we would present the more conservative analysis in the revised paper. We appreciate feedback on this.
For EEG:
We used sensor data. Distance between electrodes LOT, ROT for computing inter-hemispheric transfer was >10 cm, suggesting that source mixing may be small.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarifications although I am not convinced about some
Comment: I would like to thank the authors for the thorough rebuttal. I am mostly satisfied and I would suggest that specifically the technical details of the M/EEG data analyses be added to the manuscript.
There are two major points that still do not convince me.
1. The authors use a null-hypothesis test that randomly permutes trials with the same stimulus feature value, which is supposed to test for spurious values induced by X,Y covariations due to stimulus-signal sharing. They report that this test correctly rules out "that the null model's FIT and TE are generated only by source sharing with no real transmission". However, I am not convinced that this test can separate interactions between X and Y that are just due to instantaneous mixing from genuine time-delayed interactions between X and Y (taking the issue of stimulus dependence aside). By permuting trials of X and Y, all statistical associations between X and Y are destroyed. This includes those introduced by instantaneous mixing of otherwise independent sources. The test thus seems too liberal, meaning it is too easy to reject the null hypothesis also for mixed independent signals. Or, in other words, the null hypothesis (X and Y independent) is too unrealistic. A more realistic null hypothesis would be that X and Y are in fact mixtures of independent signals (see [1]). I agree that the precautions of the authors (selecting mutually far away regions) could be the reason for the results to come out as expected. But I would like to clarify the role of the statistical approach.
2. The authors state "Importantly, analysis of the null model also shows that with instantaneous source mixing the ratio between stimulus info in X and Y is constant in time". I am surprised by this result, as I believe that the stimulus information in X and Y depends on the SNR, which could be highly volatile. EEG and MEG measure macroscopic brain signals composed of many individual components (such as different rhythms), and the amplitudes of these components fluctuate strongly over time and with task. Assuming that some of these activities are informative with respect either to each other or to the stimulus, and some are not, the SNR is constantly changing. An example would be someone closing their eyes or moving their leg for a second. This would lead to substantial changes in the power of brain rhythms in the alpha and mu bands. And even if these actions were unrelated to the stimulus, this would alter the SNR of stimulus-related information in the data in the same way as a change in the mixing coefficients would.
[1] Shahbazi, F., Ewald, A., Ziehe, A., & Nolte, G. (2010). Constructing surrogate data to control for artifacts of volume conduction for functional connectivity measures. In 17th International Conference on Biomagnetism Advances in Biomagnetism–Biomag2010: March 28–April 1, 2010 Dubrovnik, Croatia (pp. 207-210). Springer Berlin Heidelberg.
---
Reply to Comment 1.1.1:
Comment: We are glad that the Reviewer is mostly satisfied by our additional analyses.
As stated in Rebuttal, we will add the details of M/EEG analyses to the paper.
Thanks for the additional suggestions.
(1) RE: “by permuting trials of X and Y, all statistical associations between X and Y are destroyed, including those introduced by instantaneous mixing of otherwise independent sources”. Our permutations do not shuffle trials and/or time points indiscriminately. The permutations create surrogate data associating the entire time series of X within a given trial to the entire time series of Y within another randomly chosen trial with the same stimulus feature value. They generate surrogate data that fully preserve auto-correlation in X and Y due to stimulus-info time variations and to autocorrelated noise, stimulus info in X and Y at each time point, and inst. correlations due to similarity of stimulus tuning of instantaneously mixed stimulus-informative sources. What they destroy (besides the genuine time-lagged communication between X and Y) is the inst. noise correlations due to inst. mixing of non-stimulus-informative sources. Having verified that the value of FIT (see below) is unaffected by the strength of inst. noise correlation between X and Y (because FIT's PID discards any part of X,Y info that is not about the stimulus feature), the permutation surrogate provides a reasonable initial test of genuine time-lagged communication in the presence of autocorr. in X and Y, stimulus info in X and Y, and inst. correlations due to source mixing (inst. corr. due to stimulus-signal mixing are maintained; inst. corr. due to noise mixing are destroyed but do not affect FIT). Apologies if this was not clear. We will clarify it in revision.
We showed in Rebuttal (Fig R1B) that the surrogate permutation is good for the null model of one informative mixed source.
Following the Reviewer’s new suggestion, we now simulated a new null model in which X and Y are linear mixtures of the same sources with possibly unequal mixing weights between X and Y. Assuming that each source has a stimulus-driven component (which is 0 for non-stimulus informative sources) plus own indep. additive noise, this can be compactly written as
X=F(s) + Nx +Ex
Y=G(s) + Ny +Ey
where F and G are the stimulus tuning functions of X and Y (sum of the stimulus-driven components of all stim-informative sources), Nx and Ny are the source-mixing noise of X and Y (sum of the stimulus-unrelated components of all sources), and Ex, Ey are indep. noise. Nx, Ny can be instantaneously correlated due to source mixing. We varied across simulations the level of independence between sources (from partial to full). Making the sources more independent from each other and/or making the source weighting more different between X and Y decreases the inst. noise correlation between Nx and Ny, and also makes the functions F and G more dissimilar.
Simulating this system, we found that (as explained above) FIT does NOT depend on the strength of inst. noise correlation between X and Y (unlike TE, which does), and that the spurious values of FIT are reduced when F and G are more dissimilar (because there is less redundant stimulus info between past of X and present of Y). Importantly, because FIT does not depend on the strength of the inst. noise correlation (destroyed in the permutation surrogates), for all tested simulations (including those with major mixing strength diff. between X and Y), the permutation test correctly classified these spurious FIT values as non-significant. Thus, the permutation surrogates can reasonably cover this new case.
We think it is useful to offer readers a simple surrogate permutation test for significance assessment. However, we agree that no surrogate data is perfect. In the revised paper, we will add the above simulations and clarifications, and carefully avoid overstatements. We will highlight the assumptions and limitations of the simple surrogate permutation, and we will cite [1] as a way to construct more conservative surrogates. We hope that this effort will contribute to promoting more frequent and careful use of surrogates. Such use is currently uncommon in neuroscience, and there is no single commonly accepted surrogate-generation method.
(2) RE: We agree that stimulus info timecourse differences in X and Y do not rule out genuine stimulus communication when noise in X and Y is not stationary (it was stationary in our Rebuttal's null model). We will clarify this in revision. Note that we think that examining differences in info timecourses between sites can be used as a sanity check to flag possibly spurious FIT values, but NOT to prove/disprove real communication. A time-invariant ratio of stimulus info in X and Y would make it difficult to trust a positive value of FIT as real (it would require contrived scenarios, e.g. the level of noise in one site being proportional at each instant to the amount of stimulus info transmitted across sites).
---
Rebuttal 2:
Comment: Dear Reviewer,
Thanks again for your insights and suggestions, which we greatly appreciated. We wonder whether you received our rebuttal containing the results of the two requested simulations on source mixing, the reanalysis of MEG data with more conservative inter-ROI distances, the details and evaluation of spatial resolution of the MEG beamformer, the requested numerical investigation of the behavior of the FIT measure with number of trials and dimensionality of the neural response space, as well as other clarifications and extra analyses in response to suggestions of other reviewers. The week in which we can interact is getting to a close and we would really appreciate the opportunity to receive your feedback on our work. Thanks so much! | Summary: The submission proposes a measure called Feature-specific Information Transfer (FIT) which can be used to partial out information transmitted from a sender to a receiver (in this case, two brain regions) about a specific feature (another variable) in a casual way (i.e. it is not present in the history of the receiver. It does so using Partial Information Decomposition (PID), a new way of decomposing mutual information into shared, unique and synergistic components. Basic properties of the measure are established and validated theoretically and by simulation, and it is then applied to analyze three neural datasets of varying modalities, yielding both some recovered sanity checks and novel analysis results.
Strengths: The paper leads with a very strong exposition, which explains exactly what it is doing and why. The overall reasoning about the desiderata of FIT is likewise clear and comes at the right pacing in the paper. The simulation and experimental results are comprehensive and show strong benefits over the prior TE approach.
Weaknesses: I think that the paper shunts too much important content to the appendix. PID and related concepts are likely to be less familiar to the NeurIPS audience, and the paper does not give much intuition on why various claims hold beyond a reference to the extensive supplement. Given the audience, I suspect that pushing more detail on the neuroscience data and experiments to the supplement in favor of bringing more theoretical meat to the main text would make for a stronger paper. Even if much of the detail remains in the supplement, papers such as this can benefit from providing at least intuitions or sketches as to why some properties might hold.
I also found minor typos:
* l40 features-specific -> feature-specific?
* l102 what is $Y_t$? Is it intended to be $Y_{pres}$, or am I misunderstanding something here?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Can you explain in more detail why $SUI(S: \{X_{past}, Y_{pres}\setminus Y_{past}\})$ can be higher than TE, motivating the definition in expr. 2? The paper makes the claim around line 93 and makes a reference to the bottom of Fig1A. I imagine it is true algebraically if one works it out but I'm not sure I see where in the figure this is explained or why it should be true intuitively.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does a good job to discuss the computational limitations of actually computing FIT and how it limits the applicability to small numbers of regions and time points.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The paper shunts too much important content to the appendix.
We agree that it would be better to bring to the main text more theoretical details about the PID used to compute FIT. In case of acceptance, we will use the extra page allowed by NeurIPS to include more PID theoretical details about FIT derivation and properties. This will include spelling out the definition of shared information, and moving to main text the Figure with the PID lattices, currently Fig S1 (useful to provide immediate intuition of why the upper bound properties of FIT hold).
If needed to free up sufficient space, as suggested by the reviewer we will push more details about the neuroscience data analysis validation to the SM (e.g. moving current Fig 4 to the SM).
Minor typos.
“l40 features-specific -> feature-specific?.” Thanks, we will correct it.
“l102 what is $Y_{t}$? Is it intended to be $Y_{pres}$?”. This was a misprint. We intended to write $Y_{pres}$, as the Reviewer correctly guessed. We will correct the misprint in revision.
Why $SUI(S:X_{past},Y_{pres} \backslash Y_{past})$ can be higher than TE, motivating the definition in expr. 2.
We agree that the reference to the bottom of Fig 1A contained in line 93 of the first submission was unclear. In the revision we will remove this and we will reference to Fig S1 (which will be moved to the main text, as mentioned in the reply above).
The fact that the first PID atom $SUI(S:X_{past},Y_{pres} \backslash Y_{past})$ can be larger than TE was derived formally from the algebra of Eq. S7. In the PID decomposition by construction the unique information is conceptually defined to be a component of (and hence equal or smaller than) the conditional mutual information about the target. However, the target in $SUI(S:X_{past},Y_{pres} \backslash Y_{past})$ is the stimulus feature S, which means that this atom is not constrained to be smaller than the Transfer Entropy $I(X_{past}; Y_{pres}| Y_{past})$, which is not defined in terms of S. Fig S1C illustrates the absence of a relationship between the PID atom $SUI(S:X_{past},Y_{pres} \backslash Y_{past})$ and TE, in which the yellow quantity TE cannot be mapped on the left lattice having S as a target. This was briefly explained in the SM after eq. S23 of the original submission and will be emphasized more in the revised paper, in case of acceptance.
The issue of redundancy about a target potentially exceeding the information between the sources is discussed in the PID literature (Harder et al, Phys Rev E 2013; Pica et al, Entropy 2017). The intuition of why this can happen is because the mechanisms connecting the target with the sources can induce a non-zero component of redundancy (known as mechanistic redundancy), independently of the correlations between the sources. That is, redundancy is created due to the similarity in the mechanisms linking the target and the sources, even for independent sources. Pica et al (Entropy 2017) developed a formalism relating PID atoms with different targets to quantify mechanistic redundancy and isolate the component of redundancy that is already manifested in the mutual information between the sources (termed source redundancy). We built on this formalism to guarantee that FIT only considers a component of source redundancy about S carried by $X_{past}$ and $Y_{pres}$ that is unique with respect to $Y_{past}$.
---
Rebuttal 2:
Comment: Dear Reviewer, Thanks again for your insights and suggestions, which we greatly appreciated. We wonder whether you received our rebuttal containing the clarifications you requested and the response to your suggestions for improving clarity, as well as other clarifications and extra analyses in response to suggestions of other reviewers. The week in which we can interact is getting to a close and we would really appreciate the opportunity to receive your feedback on our work. Thanks so much!
---
Rebuttal Comment 2.1:
Title: Thank you
Comment: Dear Authors -- thank you for the detailed rebuttal and clarifications. I maintain my favorable rating and hope to see the paper at the conference.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer, thanks again for your valuable insights, which helped us to improve the paper significantly. Thanks also for your words of appreciation for our work. We are grateful for your support. | Rebuttal 1:
Rebuttal: We are grateful to the Reviewers for their suggestions and insights. We feel that the new results we obtained in addressing their suggestions (Figs R1-4) significantly elevate the level of conceptual advance provided by our paper. These new results will be included in the revised paper in case of acceptance.
The advances from the new work, and the changes we wish to make to the paper, are summarized below. More details can be found in the replies to the individual Reviewers.
Methods development and validation:
Mixing of sources.
We developed new simulations to evaluate the effects of asymmetric mixing of sources (which can happen in real neural data because e.g. of field spread). First, we developed a simulation of the effect of mixing a stimulus-informative source with unequal proportions in the empirically measured signals X,Y. We found that in such a case, as predicted, both FIT and TE can have spurious positive values (Fig R1B). However, we also found that the permutation test that we developed and used for FIT in the submitted paper (shuffling trials with the same value of the stimulus feature) to discount the effect of common covariations in X,Y due to stimulus-signal sharing can also be used to rule out this confounder of mixing a stimulus-informative source (Fig R1B). Importantly, all previously reported real-data FIT values were already tested against this null hypothesis, suggesting that our FIT results cannot be explained by this artefact. Our analysis also shows that with instantaneous source mixing the ratio between stimulus information in X and Y is constant in time (Fig R1A). This provides an important heuristic for accepting empirical results: data with different timecourses of stimulus info in X vs Y cannot be explained by instantaneous source mixing. Importantly, all the FIT real-data results in the paper are obtained in the presence of a latency difference of >10ms between encoding of stimulus information in the putative sender and putative receiver (e.g. Fig R1B), incompatible with dominant mixing of a stimulus-informative source.
Finally, we simulated the case in which there is real FIT between two “pure” signals Z1 and Z2, but Z1,Z2 are unevenly mixed in the measured X,Y. We found (Fig R1C) that mixing reduced FIT and TE compared to the pure case. However, the correct directionality of info transfer was always detected for all mixtures. Thus, FIT/TE seem conservative and robust to this mixing. To our knowledge, this case was not considered in the previous GC/TE literature, so this extra analysis provides progress beyond FIT that is also relevant for the TE literature.
More conservative MEG analysis.
Models of source separation indicate that in MEG mixing becomes negligible at distances > 2cm. We thus ran a more conservative MEG analysis in a network of visual areas that are > 2.8cm apart, reducing possible source mixing. All previously reported FIT results are fully confirmed in this more conservative analysis (Fig R2C-F).
Scaling of FIT with data size and dimensionality.
We computed how FIT scales with the size of the dataset and the dimensionality of neural activity, and compared it with the behavior of Shannon information quantities such as TE. We found that FIT behaves much better than TE (smaller limited-sampling bias, less data needed for accurate estimation, more robustness to the curse of dimensionality); see Fig R4A. This is because FIT does not include synergy terms, which, according to previous literature, are the most biased and data-hungry ones. We have also now implemented in the FIT numerical routine the limited-sampling bias corrections from the neural information theory literature, which further help.
Comparison with DFI on real data.
We now computed DFI, a previously proposed measure of feature-specific information transmission, on real data as well. We found (Fig R3) that the problems with DFI predicted by mathematics and simulations in the previous version of our paper also appear in the neural datasets. On real data, DFI was very often negative, and it failed to detect directionality or feature specificity in cases where previous literature leads us to expect that they should exist.
Study information about multiple features transferred simultaneously.
In the submitted paper we already showed on real data that FIT can reveal encoding of multiple features. Simultaneous encoding of multiple stimulus features was investigated in all three datasets (MEG, EEG, spikes) and was found in two of them (MEG, EEG). However, in the submitted paper we did not demonstrate this property with simulations. We have now performed a simulated study of the encoding and transmission of multiple features (Fig R4B). We simulated two independent stimulus features encoded and transmitted simultaneously (one more and one less strongly). We found that FIT correctly identifies that both features are encoded and transmitted, and correctly ranks the features by how much information is transmitted about each. These simulations will be added to the SM of the revised paper.
Writing and presentation:
We will bring more details of the FIT maths (shared and unique information definition, PID lattices, mathematical properties of FIT) from the SI to the main text. We will use the extra page allowed for the FIT camera-ready paper. If space is a problem, we will move the EEG analysis figure to the SM.
We will be clearer about how we computed the information quantities (by discretizing the data and computing histograms). This information was present in the submitted paper, yet it was not prominent or well organized.
We will add an SM section that explains this clearly and contains a table with the number of bins used for each (simulated or real) dataset and figure panel. We will add to SM3.1 a description of the requested MEG source localization (summarized from the original data publication, which reports full details).
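As an illustration of the histogram-based estimation pipeline described above (a hedged sketch with illustrative names and binning choices, not the authors' actual routine), a plug-in mutual-information estimate from discretized data might look like:

```python
import numpy as np

def mutual_info_binned(x, s, n_bins=4):
    """Plug-in (histogram) estimate of I(X;S) in bits, from a continuous
    signal x and a discrete stimulus s, using equipopulated binning of x."""
    # Discretize x into n_bins bins with roughly equal occupancy.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    xb = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    # Joint histogram over (binned x, stimulus value).
    s_vals = np.unique(s)
    joint = np.zeros((n_bins, len(s_vals)))
    for j, sv in enumerate(s_vals):
        joint[:, j] = np.bincount(xb[s == sv], minlength=n_bins)
    joint /= joint.sum()
    # Marginals and the plug-in mutual information.
    px = joint.sum(axis=1, keepdims=True)
    ps = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ ps)[nz])).sum())
```

In practice such plug-in estimates are biased upward for limited data, which is exactly why the bias corrections mentioned above are applied on top.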
We will better define what we mean by “feature”.
Pdf: /pdf/ec4e6e1196e02d1c22bb2ddc3f18e398b66abb4c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Team-PSRO for Learning Approximate TMECor in Large Team Games via Cooperative Reinforcement Learning | Accept (poster) | Summary: This work addresses the problem of solving large-scale zero-sum two-team games. To solve this they explore extensions to the Double Oracle and Policy-Space Response Oracle algorithms that solve for a team-based equilibrium concept called TMECor. This is a straightforward extension of both of these algorithms that modifies the equilibrium concept and how policies are added to the empirical game. They evaluate their methods on the four-player variants of Kuhn Poker, Liar's Dice, and Google Football.
Strengths: - Background clearly explains all of the necessary elements required.
- The subscripting used in notation throughout, which uses both color and a unique font (colorblind-friendly), helps keep concepts very clear and organized.
- This paper is honest and humble with how much of their work is building upon preexisting work.
- Includes discussions of their results on both algorithmic iterations and walltime.
Weaknesses: - Related work contains a lot of very tangential references, for example the various PSRO extensions. Later in the paper a variety of significantly more related work is discussed; it would make sense for this to be in the related work section instead.
- L232: This reads like the authors are claiming using DRL as a BR oracle is a contribution of this work, when it is one of PSRO.
- L238-244: This is pretty challenging to read and is a single sentence, strongly recommend editing into individual statements.
- In the PSRO sections, the empirical game is treated as a restricted game. This is misleading and incorrect, because the payoffs are estimates.
- No baselines are considered in DO variants of their algorithms.
- Limited evaluation of the PSRO variants of the algorithms.
- Missing approximate exploitability curves of PSRO algorithms. Consider also performing a combined-game analysis.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - L33: Why are you considering TMECor and what other alternatives did you consider and why not them?
- Could you please cite and define TMECor and its abbreviation?
- L56: The Stratego paper explicitly says they use self-play. Could you please provide a citation showing that they are in fact using Pipeline PSRO?
- L123: I thought team games were defined by the players sharing payoffs. Why are they being separated here?
- L247: What is the "variant" of the algorithm? What specific changes are being made that are novel to this work and why are they necessary? If applicable, can you show gains from these changes?
- This work could be enhanced by including baseline method(s). For example, a natural one is to consider that each team player plays the same policy, where the policy parameters are a correlation device and their private information realizes different behaviors. Another option is to treat each team as a single agent and then use an algorithm like QMIX to factor out per-player behavior. Without something akin to these baselines, it's hard to understand the benefit of the team-like treatment applied here.
- L292-293: It's not clear to me that this is true. Surely, if the first n-1 BRs are exact and the last, nth, iteration is an epsilon-BR, then the final result will be an approximate TMECor. However, the compounding errors of having epsilon-BRs at every iteration make it hard to say anything about convergence. Or are you trying to make a super weak claim about how approximate the TMECor is? If so, I suggest being more clear about this.
- L305: If the teammates are sharing policy parameters what is really the difference between Team-PSRO and Team-PSRO-MM? The strategy sets are the same?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Adequately discusses limitations with team double oracle (L220-225), and generally (L356-362).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed review and valuable feedback on our work. These insights have greatly helped us in identifying areas of improvement. We would like to address the concerns as follows:
**Strengths**
We're pleased that the reviewer appreciated our clear background explanation, unique notation, honest portrayal of our contributions, and the inclusion of discussions on algorithmic iterations and walltime.
**Weaknesses**
Related Work Organization: We agree that some references in the related work section could be reorganized. We will revisit this section and ensure that the most relevant works are appropriately placed and discussed.
Lines 232, 238-244: We understand your concerns and will clarify our contributions and rephrase the complex sentence to enhance readability.
Treatment of the Empirical Game: Your point about our phrasing of the empirical game as a restricted game is well-taken, and we will edit this description to avoid any confusion.
Baselines in DO Variants: Thank you for this great suggestion. We have included additional experiments against fictitious team play [FCGS18]. As shown in the figure in our rebuttal pdf, our methods outperform fictitious team play. We believe that these additional experiments greatly improve the paper.
Evaluation of PSRO Variants: We will work on providing more comprehensive evaluation details for the PSRO variants.
Approximate Exploitability Curves: Our decision not to compute exploitability in Google Research Football (GRF) aligns with previous research, given the substantial computational resources required and the potential for high inaccuracy due to the environment's complexity and randomness.
**Questions**
Consideration of TMECor: When it comes to adversarial team games in which the players cannot secretly communicate, there are two main notions of equilibrium that are considered: TME and TMECor. TME corresponds to having the team members play independently, while TMECor allows for correlation between the team members. We believe there are several reasons why TMECor is the superior solution concept for the setting we consider. First, TMECor always guarantees a higher value for the team. Second, correlation between team members can be achieved easily, for example by communicating before the game starts, as in bridge. Finally, TMECor is known to be learnable, while TME is significantly harder, both in theory and in practice (for example [ZFS23] shows that TME is in a harder complexity class than TMECor). We discuss this in lines 164-170 but will include expanded discussion. We define TMECor in lines 128-132 (we will add a citation to the original paper too in the final version).
Stratego Paper and Pipeline PSRO: Sorry for the confusion. The Pipeline PSRO paper achieves state-of-the-art performance in Barrage Stratego. We will edit the paper to reflect this fact.
Team Games and Payoffs: Although different players can achieve different utility, the team utility is the sum of both players, so solving for TMECor can be viewed as a setting where both players have the same utility (the team utility). We will add discussion to make this more obvious.
Novelty of Algorithm Variants: We will clarify the specific novel changes in the algorithm variant and demonstrate the necessity and gains from these changes.
Inclusion of Baseline Methods: Regarding the suggestion of having each team player play the same policy: in fact, that is exactly what we do with MAPPO. Each team player has the same weights. But we don’t just have one policy, we have multiple in a population. We already include a self-play variant where there is just one policy, which I think is what is being referenced. As to including QMIX ablations, we believe that this is not the interesting aspect of our approach. There are multiple existing methods for cooperative deep RL, including MAPPO, QMIX, and many others. We run experiments with MAPPO because it has been shown to be the best-performing algorithm, but perhaps different cooperative RL algorithms would work better in different domains. However, we leave this ablation study to future work, as the main contribution of our paper is showing that cooperative RL can be combined with a PSRO-based method to find approximate TMECor.
Convergence: Actually we still have convergence to epsilon-TMECor if all best responses are epsilon-BRs. To see this, at convergence, we have that the best response cannot exploit the opponent meta-Nash more than epsilon over the existing meta-Nash for both teams, so by definition the meta-Nash is an epsilon-TMECor. We will make this point more clear in the paper and can add a brief proposition and proof if the reviewer would like.
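A minimal formalization of the convergence argument sketched above (notation illustrative, not taken from the paper) could read:

```latex
% Let \sigma = (\sigma_A, \sigma_B) be the meta-Nash over the two teams'
% populations at convergence, and suppose the \epsilon-best-response
% oracle finds no joint team policy gaining more than \epsilon:
\max_{\pi_A} u_A(\pi_A, \sigma_B) \le u_A(\sigma_A, \sigma_B) + \epsilon,
\qquad
\max_{\pi_B} u_B(\sigma_A, \pi_B) \le u_B(\sigma_A, \sigma_B) + \epsilon.
% These two inequalities are exactly the incentive constraints defining
% an \epsilon-TMECor, so \sigma is an \epsilon-TMECor.
```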
L305: The team members share parameters in the joint BR. So every strategy in Team PSRO will share parameters. But in Team PSRO-MM, the policy for each player can come from different iterations, which have different parameters. We will make sure this is clear in the revised version.
**Conclusion**
We acknowledge your concerns and agree that our paper requires improvements. We believe that the planned revisions, as addressed above, will significantly enhance the quality of our work and respond to your critique. We hope that these clarifications and commitments to improvements might lead you to reconsider the rating.
[FCGS18] Gabriele Farina, Andrea Celli, Nicola Gatti, and Tuomas Sandholm. "Ex ante coordination and collusion in zero-sum multi-player extensive-form games." In: Advances in Neural Information Processing Systems (NeurIPS), 2018.
[ZFS23] Brian Hu Zhang, Gabriele Farina, and Tuomas Sandholm. "Team Belief DAG: A Concise Representation for Team-Correlated Game-Theoretic Decision Making.". In: International Conference In Machine Learning (ICML), 2023.
[CG18] Celli, Andrea, and Nicola Gatti. "Computational results for extensive-form adversarial team games." In: AAAI Conference on Artificial Intelligence (AAAI), 2018.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions and addressing the errors within the manuscript.
To follow-up on the existing points of discussion on my review:
- Thank you for including the baseline, this definitely helps contextualize the results. I am still surprised to not see a treatment of DO using some form of team reduction. This would be a much closer comparison but is not a reason to block this work.
- I do think including additional ablations/comparisons for PSRO will help the readers better understand this work. However, contrary to my fellow reviewer MAu9, I disagree that it is unreasonable to request methods that "can be easily extended ... with some modification", because the modifications themselves warrant independent studies to verify their correctness.
- I would suggest adding the discussion about TME and TMECor within the final version of the manuscript. NeurIPS is not primarily a game theory audience so I think this would be very welcome.
- In regards to the baseline methods, I understand the MAPPO and self-play methods. My point was more to better understand what benefits were empirically gained for explicitly training these agents as a "team". I would argue that the viewing of common-interest agents as analogous to the decentralized control of a single agent is the most immediate and direct comparison. Therefore, by using it as a baseline, we better understand what is of import with treating them as individuals instead of as a collective single entity. In my opinion, this result, which is unreasonable to ask for in this review period, would be the most convincing piece of evidence to me in support of your claim.
I still have some reservations about this work and whether the team aspect of it is any different than a decentralized single-agent problem, and some of the experimental things we've discussed. However, my fellow reviewers Gu1R and kqkC are happy with it, and I don't believe MAu9 raised any concerns necessitating the rejection of this work, so I am willing to change my score to a borderline/weak accept. However, I would neither fight for the acceptance nor rejection of this work.
---
Reply to Comment 1.1.1:
Title: Author Response
Comment: We sincerely appreciate your thoughtful follow-up and the time you've invested in evaluating our work. Your insights have been instrumental in identifying areas for improvement, and we are committed to addressing them in the final version of the manuscript. Below, we respond to your specific comments:
**Baseline for DO Using Team Reduction:** What exactly do you mean by team-reduction? If you mean trying QMIX as an ablation, we would expect Team PSRO with QMIX to underperform Team PSRO with MAPPO since both QMIX and MAPPO are cooperative RL algorithms but MAPPO empirically performs better across a wide range of environments. We will certainly consider this aspect in future work and appreciate your understanding that it's not a reason to block the current submission.
**Additional Ablations/Comparisons for PSRO:** We have added a novel algorithm, called Team PSRO-MM Top-K, where we select the top k strongest opponents (in this case k equals 4) after every iteration of evaluation and use these policies to get a mixed policy to add to the population. We have done additional experiments to compare our method with other baseline methods in GRF, including a deep RL version of fictitious team play [FCGS18] and PFSP [AlphaStar]. **We have included these experiments in the rebuttal pdf and show that the PSRO-MM Top-k outperforms all baselines, including the added baselines of fictitious team play and PFSP.**
**Discussion about TME and TMECor:** Your suggestion to include a more detailed discussion about TME and TMECor is well-received. We recognize that NeurIPS has a diverse audience, and we will ensure that our final manuscript includes a comprehensive explanation that caters to readers with varying backgrounds in game theory.
**Understanding the Benefits of Treating Agents as a Team:** We understand your interest in exploring the empirical gains of training agents as a team versus treating them as a decentralized single-agent problem. Since finding a joint best response is naturally a cooperative RL problem, we anticipate independent RL to perform worse than MAPPO, as has consistently been demonstrated across a wide range of cooperative RL environments. We will consider this direction in future research and appreciate your understanding of the limitations within the current review period.
In conclusion, we have shown that finding TMECor in large games can be reduced to a cooperative RL problem. **While ablations on the type of cooperative RL algorithm used (MAPPO vs. QMIX vs. independent RL) are potentially interesting, the main point of our paper is that one can use whichever cooperative RL algorithm they choose.** We chose MAPPO due to its demonstrated empirical superiority compared to other algorithms, but one could choose a different cooperative RL algorithm and we would still expect our method to work. We have also included a new algorithm and new baselines for our deep RL experiments.
We are grateful for your willingness to change your score to a borderline/weak accept. We understand that you have reservations, and we are committed to addressing them to the best of our ability in the final manuscript. Your feedback has been invaluable, and we believe that the planned revisions will enhance the quality and impact of our work. Thank you once again for your constructive feedback and consideration. | Summary: This paper proposes two algorithms, “Team PSRO” and “Team PSRO Mix-and-Match” for zero-sum two-team games. Team-PSRO is guaranteed to converge to a TMECor. The algorithms extend PSRO to zero-sum two-team games. Team-PSRO Mix-and-Match is an improved version of Team-PSRO with better population policies. The experimental results show the convergence of Team DO in Kuhn poker and Liar’s dice, and Team PSRO beats self-play in the Google Research Football environment.
Strengths: - Extend PSRO to “Team PSRO”, and propose “Mix-and-Match” Team PSRO.
- Team-PSRO is guaranteed to converge to a TMECor.
- The experimental results show the convergence of Team DO in Kuhn poker and Liar’s dice, and Team PSRO beats self-play in the Google Research Football environment.
Weaknesses: - The description of Team DO-MM and Team PSRO-MM is unclear. For example, the function P is not clearly described, which makes the success of the new method unconvincing.
- It is unclear how exactly the NE is derived in Team-PSRO when implementing it. It would also be helpful if an anonymous GitHub repository were provided, or the code included in the supplementary attachment.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The citations are written in a very confusing way. I believe you used a wrong LaTeX command.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful examination of our paper and their valuable comments. In response to their concerns, we provide the following explanations and planned corrections:
**Weaknesses**
Unclear Description about Team DO-MM and Team PSRO-MM: P is defined in line 262. We will move this definition to the notation section to make it clearer. We will also work on improving the text to better elucidate the concepts and methods involved in Team DO-MM and Team PSRO-MM.
Deriving NE: The paper indeed missed a detailed description of how NE (Nash Equilibrium) is computed in TEAM-PSRO. The short answer is that we solve it exactly using an LP. We will add the required details in the revised version, making sure it's clear and understandable.
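For readers unfamiliar with this standard construction, the meta-game Nash equilibrium of a zero-sum matrix game can be obtained with the textbook maximin LP. The sketch below is illustrative (not necessarily the authors' implementation) and uses SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_nash(payoff):
    """Row player's maximin mixed strategy and game value for a
    two-player zero-sum matrix game, via the standard LP formulation."""
    n_rows, n_cols = payoff.shape
    # Variables: x_1..x_n (row mixed strategy) and v (game value).
    # linprog minimizes, so we minimize -v to maximize the value.
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0
    # For every opponent column j: payoff[:, j] . x >= v,
    # written as -payoff.T @ x + v <= 0.
    A_ub = np.hstack([-payoff.T, np.ones((n_cols, 1))])
    b_ub = np.zeros(n_cols)
    # Probabilities sum to one; v is unbounded.
    A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])
    b_eq = np.ones(1)
    bounds = [(0, None)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[:n_rows], res.x[-1]
```

For matching pennies (`[[1, -1], [-1, 1]]`) this recovers the uniform strategy with game value 0; the column player's strategy follows by symmetry from the transposed, negated matrix.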
Code Availability: Your suggestion about providing an anonymous GitHub link or supplementary attachment with the code is valuable. We will try to provide an anonymized repository for the camera-ready version.
**Questions**
Confusing Citations: We apologize for the confusion caused by the way citations are written. We suspect a technical issue in the LaTeX formatting. We'll correct this in the revision, ensuring that the citations follow a consistent and standard format.
**Conclusion**
We are committed to addressing all the points you have raised and believe that these revisions will substantially enhance the clarity and completeness of the paper. We hope that our response assures you of our determination to deliver a high-quality paper and that you might consider a higher rating. Thank you once again for your thoughtful review, and we look forward to your further feedback. | Summary: The paper presents “game-theoretic” reinforcement learning methods for playing zero-sum games (between teams of players). A theoretical claim (proof is in the supplementary material) is made about convergence of the base tabular methods to the TMEcor solution concept, and empirical results compare the methods to self play RL in the “Google Research Football” domain.
Strengths: 1. By and large, the paper is presented very well. Concepts are explained well, and the paper is very polished. The derivation and explanation of methods appear to be sound.
2. The topic seems important, with well-defined limitations
3. The results appear to show good (albeit incremental) performance, though see statements in “weaknesses” for clarification
4. The enhanced methods are simple
Overall, this seems to be a useful paper.
Weaknesses: 1. The results need better explanation and analysis, particularly surrounding the presentation of Figure 2. Not enough information is supplied. I have the following questions: (1) Were appropriate statistical tests run to evaluate statistical significance? If so, they should be reported. If not, claims of being better are unsubstantiated. (2) What do the error bars in Figure 2a-b represent? (3) What is Elo (Figure 2a)? (4) What is the built-in AI against which the algorithms were paired? Why only the comparisons when paired with built-in AI? What about results when self play RL and PSRO-MM were the opponents? (5) What about RL algorithms trained against a variety of opponents instead of just self play?
I think these questions are easily addressable, and understand the difficulty of fitting everything into a short conference paper. It remains to be seen whether the clarifications would favor the proposed techniques or not.
2. The paper has a couple of unsubstantiated claims that, in my opinion, need to be removed or altered:
A) The paper claims to “introduce *the first* game-theoretic techniques for two-team games” (emphasis added). The paper does not prove this to be true (it is doubtful this could be proven), and often such claims are not true. The paper later re-states the claim with the caveat of it being the first “to their knowledge,” which is better but still unsubstantiated. In general, the claim about being “the first” has little to no scientific value and causes the reader to focus on whether it is indeed “the first” rather than the contribution of the paper. I’d recommend the statements be removed altogether.
- Second, the paper claims (intro to section 4) that tabular methods will not scale to large games. That may or may not be true (seems like there are ways it could be done, and perhaps very well, if one looks at it from a different paradigm). Regardless, the paper does not back up the claim. I think the statement should be softened or at least the opinion be given better context.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The results need better explanation and analysis, particularly surrounding the presentation of Figure 2. Not enough information is supplied. I have the following questions: (1) Were appropriate statistical tests run to evaluate statistical significance? If so, they should be reported. If not, claims of being better are unsubstantiated. (2) What do the error bars in Figure 2a-b represent? (3) What is Elo (Figure 2a)? (4) What is the built-in AI against which the algorithms were paired? Why only the comparisons when paired with the built-in AI? What about results when self play RL and PSRO-MM were the opponents? What about RL algorithms trained against a variety of opponents instead of just self play?
I'm potentially inclined to change my review based on the answers to these questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, I think the paper does a good job with this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's time and detailed feedback. We aim to address all the concerns mentioned:
**Results Explanation and Analysis (Figure 2)**
Error Bars and Statistical Significance: We acknowledge that the information about statistical tests was not included in the main text. Indeed, we performed statistical tests for the comparisons, and the results were statistically significant. This will be clarified in the revised version. The training details and additional experiments can be found in the appendix. For Figure 2a-b, we made the agents trained by different algorithms play against the built-in AI at different difficulty levels and compared the goal difference. To avoid the influence of randomness, we ran three seeds for each experiment; the error bars in Figure 2a-b represent the standard deviation across seeds. This comparative approach has also been employed in PSRO w. RD and BD [1] as a major evaluation method.
Elo Explanation: Elo (Figure 2a) refers to the Elo rating system used to measure the relative skill levels of the algorithms. The Elo ratings were computed by making agents play against each other. Here is a detailed explanation of how they were computed:
- Assign an initial Elo rating (1000 in our setting) to each player.
- Determine the expected score of each player in a game. This is calculated as follows:
- For player A: expected score = 1 / (1 + 10^((B - A) / 400))
- For player B: expected score = 1 / (1 + 10^((A - B) / 400)) where A and B are the current Elo ratings of the two players.
- Play the game and determine the actual score of each player.
- Update the Elo ratings of the two players based on the outcome of the game and the expected scores:
- For player A: new rating = old rating + K * (actual score - expected score)
- For player B: new rating = old rating + K * (expected score - actual score) where K is a constant that determines the "weight" of the update (set as 10 in this case)
- Repeat the process for each game, using the updated Elo ratings from the previous game as the starting point for the next game.
We will add an explanation of these details in the revised version.
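The update rule above can be sketched in a few lines of Python (illustrative only; initial rating 1000 and K = 10 as stated in our setting, function names hypothetical):

```python
def expected_score(rating_a, rating_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a, rating_b, score_a, k=10):
    """Update both ratings given A's actual score (1 win, 0.5 draw, 0 loss).
    B's actual and expected scores are the complements of A's."""
    exp_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Two agents start at the initial rating of 1000; agent A wins one game.
a, b = elo_update(1000.0, 1000.0, 1.0)  # a -> 1005.0, b -> 995.0
```

Repeating `elo_update` over the full round-robin of games, carrying the updated ratings forward, yields the Elo curves reported in Figure 2a.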
Explanation of Built-in AI and Comparison to Other Methods: GRF incorporates a built-in AI agent and allows for difficulty adjustments. Consequently, in GRF-related experiments, the built-in AI is frequently employed as a benchmark to evaluate the performance of trained agents, e.g. in PSRO w. RD and BD [1] and TiKick [2]. We use relative population performance to evaluate the performance of different populations. In Figure 4 of the appendix, using the agent trained by self-play as the benchmark, we also compare the performance of both Team PSRO and Team PSRO-MM as the training time steps increase.
New Self-Play Variant: While we haven’t seen the suggested approach of RL trained against a variety of opponents in the literature, and we don’t predict that such a method would have game-theoretic guarantees, this is an interesting suggestion for future work.
[1] Liu, X., Jia, H., Wen, Y., Hu, Y., Chen, Y., Fan, C., ... & Yang, Y. (2021). Towards unifying behavioral and response diversity for open-ended learning in zero-sum games. Advances in Neural Information Processing Systems, 34, 941-952.
[2] Huang, S., Chen, W., Zhang, L., Xu, S., Li, Z., Zhu, F., ... & Zhu, J. (2021). TiKick: towards playing multi-agent football full games from single-agent demonstrations. arXiv preprint arXiv:2110.04507.
**Unsubstantiated Claims**
First Scalable Game-Theoretic Techniques Claim: We do not claim to be the first game-theoretic technique for two-team games. That is clearly not true based on the literature we cite in the paper. Instead, we claim to be the first **scalable** game-theoretic technique for two-team games. The currently most scalable method for two-team games is Zhang et al. [ZFS23], which is a tabular method that clearly will not scale to Google Research Football. We will soften the language a bit by claiming that *to our knowledge* we are the first scalable game-theoretic technique for two-team zero-sum games.
[ZFS23] Brian Hu Zhang, Gabriele Farina, and Tuomas Sandholm. "Team Belief DAG: A Concise Representation for Team-Correlated Game-Theoretic Decision Making.". In: International Conference In Machine Learning (ICML), 2023.
**Answers to Questions**
We believe that the answers provided above address the questions raised by the reviewer.
**Conclusion**
We value the constructive feedback provided, and we believe that with the planned revisions, the paper will be strengthened significantly. The questions and concerns raised are indeed addressable, and we are committed to making the necessary changes to clarify all aspects.
We hope that these explanations and our commitment to revise the paper accordingly will lead the reviewer to reconsider the rating. Thank you once again for the thoughtful review, and we look forward to your feedback on our responses.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I'll keep my score as it stands.
Notes:
1. Without seeing the results of the statistical tests, it is difficult for me to comment on them, though good that they are "statistically significant."
2. I still think it would be better to not try to claim to be the first, even if claims are softened. The community learns not by racing to the finish line, but by understanding the issues. Subjective claims of being first divert our attention away from the issues.
---
Reply to Comment 1.1.1:
Title: Author Response
Comment: Thank you once again for your thoughtful comments and for maintaining your score. We appreciate your engagement with our work and your constructive feedback. We would like to respond to your notes:
Statistical Tests: We understand your concern about not being able to comment on the statistical tests without seeing the results. In the final version of the paper, we will include the details of the statistical tests, ensuring that they are transparent and accessible to readers. We believe this will provide the necessary context and validation for our claims.
Claim of Being the First: We acknowledge your perspective on the claim of being the first, even if softened. We agree that the focus should be on understanding the issues and contributing to the community's knowledge rather than racing to be the first. In light of your feedback, we will remove the claim altogether and concentrate on articulating the novelty and value of our approach without any comparison to the timing of other works.
We believe that these changes will align with your suggestions and further improve the quality of our paper. We are committed to making these revisions in the final manuscript.
Once again, we express our sincere gratitude for your time, effort, and valuable insights. Your feedback has been instrumental in guiding our revisions, and we look forward to incorporating your suggestions. | Summary: This work aims to find TMECor in two-team zero-sum games. The authors extend PSRO from two-player games to two-team games and propose Team-PSRO, which is guaranteed to converge to a TMECor. They further propose Team-PSRO Mix-and-Match, which generates more joint policies by mixing individual policies from different PSRO iterations. They evaluate the proposed algorithms on Google Research Football and achieve better results than self-play.
Strengths: This paper is clearly written and easy to follow. The use of symbols and definitions in the theory part is in accordance with the standard notations.
Weaknesses: The novelty of the proposed algorithms is marginal, and more comparisons with closely related work are needed in the experiment. Please see below for detailed discussions.
1. Lack of novelty. Team-PSRO simply applies PSRO to two-team zero-sum games by learning a joint best response for the whole team, which is no different from PSRO for two-player games and has been discussed in recent works like [4]. Team-PSRO-MM further proposes to mix individual policies from different iterations to generate more joint policies. However, mixing all individual policies will produce exponentially many joint policies and requires much more computation to fill the payoff table. This paper simply mixes all policies or randomly samples policies to mix. It would help improve the novelty of this paper if the authors could design a smarter way to mix and match, producing the joint policies that are most useful.
2. Need more baselines in experiments. For small games like 4-player Kuhn poker and Liar's dice, methods like NFSP [1] and CFR [2] should be added as baselines. Though these methods are designed for two-player zero-sum games, they can be easily extended to two-team zero-sum games with some modification. For larger games like GRF, closely related work like PSRO w. BD and RD [3] and FXP [4] should be added as baselines. These methods also build on PSRO and report strong results in GRF full games. It is also straightforward to use NFSP in GRF.
[1] "Fictitious Self-Play in Extensive-Form Games." Heinrich, Johannes, Marc Lanctot, and David Silver.
[2] "Deep counterfactual regret minimization." Noam Brown, et al.
[3] "Unifying Behavioral and Response Diversity for Open-ended Learning in Zero-sum Games." Xiangyu Liu, et al.
[4] "Fictitious Cross-Play: Learning Global Nash Equilibrium in Mixed Cooperative-Competitive Games." Zelai Xu, et al.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. What is the difference between Team-PSRO and using PSRO while regarding the whole team as a single agent?
2. If there are n players in one team and team-PSRO has run k iterations, then the number of joint policies produced by mix-and-match would be $k^n$, and it would require many rollouts to complete the payoff matrix. Did the authors produce all the joint policies in the experiments? Is there a better way to produce only the subset of joint policies that are most useful for training?
3. What are the rules of 4-player Kuhn poker, and how many players are there in Liar's dice? These two games are originally two-player games, and there is no description of how they are modified into two-team games.
4. In the experiment on GRF, how many iterations are trained for Team PSRO and Team PSRO-MM? Does the population start from a random policy?
5. Comparison with NFSP and CFR in 4-player Kuhn poker and Liar's dice.
6. Comparison with PSRO w. BD and RD, FXP, NFSP in GRF.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewer for taking the time to assess our work and providing valuable insights. We would like to clarify certain aspects of our research that may not have been fully grasped, along with a plan to address the constructive suggestions.
**Lack of Novelty**
While it is true that our approach leverages the principles of PSRO, there are significant differences between applying PSRO to two-player games and two-team zero-sum games. The transition from the individual to team-based setting introduces complexities that need to be taken into account when designing algorithms. One consideration in the setting we study in this paper is that the players on the same team are not allowed to communicate with one another once the game has started. **A direct consequence is that methods like CFR are not applicable, unlike what is stated in the review in Weakness 2 and Question 5.**
We agree that a more sophisticated approach to mixing and matching could further improve performance. For example, we can select the top k strongest opponents (in this case k equals 4) after every iteration of evaluation and use these policies to get a mixed policy. Then we can add this policy to the population. We call this variant Team PSRO-MM Top-K, and we have run additional experiments to compare our method with other baseline methods, including a deep RL version of fictitious team play [FCGS18] and PFSP [AlphaStar]. (For PSRO w. BD and RD, it may take a long time to implement and train because the authors did not release their code for the GRF experiments.) **We have included these experiments in the rebuttal pdf and show that Team PSRO-MM Top-K outperforms all other baselines.**
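To make the Top-K idea concrete, here is a minimal stdlib Python stand-in: policies are ranked by an evaluated payoff and the k strongest are mixed uniformly. The policy names, the uniform mixing weights, and the payoff-based ranking are illustrative assumptions, not the authors' actual implementation.

```python
def top_k_mix(population, eval_payoffs, k=4):
    """Select the k strongest policies by evaluated payoff and return a
    uniform mixture over them (illustrative sketch of the Top-K idea;
    the selection/mixing used in the rebuttal experiments may differ)."""
    ranked = sorted(population, key=lambda p: eval_payoffs[p], reverse=True)
    chosen = ranked[:k]
    weight = 1.0 / len(chosen)
    return {p: weight for p in chosen}

# Hypothetical population with payoffs from one evaluation round.
payoffs = {"pi_%d" % i: s for i, s in enumerate([0.1, 0.9, 0.4, 0.7, 0.3, 0.8])}
mixed = top_k_mix(list(payoffs), payoffs, k=4)
assert set(mixed) == {"pi_1", "pi_5", "pi_3", "pi_2"}  # four strongest policies
```

The resulting mixture would then be added to the population as one new joint policy, avoiding the exponential blow-up of enumerating every combination.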
Thank you for the reference to Xu et al. That paper indeed proposes a similar algorithm to Team-PSRO but it came out after the NeurIPS deadline so we were not able to cite it for the submission. We will be sure to reference it as contemporary work in the camera-ready version. However, in that paper the authors do not make a connection to TMECor and do not propose the idea of Team-PSRO-MM.
Need for More Baselines in Experiments: We concur with the reviewer's observation that more baselines in experiments would provide a more comprehensive evaluation. **However, the reviewer’s suggestions for including methods like NFSP, CFR, and replicator dynamics do not make sense in our setting (see above).** NFSP, CFR, and replicator dynamics are all algorithms for two-player zero-sum games with perfect recall. They do not apply to two-team zero-sum games. **Although the suggested algorithms do not make sense in our setting, we have included new results with Fictitious Team Play [FCGS18]. As shown in the figure in our rebuttal pdf, we find that our methods outperform this baseline. For deep RL experiments, as mentioned above, we include baselines of deep RL versions of fictitious team play and PFSP, and show that we outperform them.**
**Clarification on Specific Questions**
Difference between Team-PSRO and single agent PSRO: Our setting considers games where team members cannot communicate during the game. As a result, Team-PSRO learns joint best responses for the team, taking into consideration the fact that team members cannot communicate. This differs significantly from considering the whole team as a single agent, which is equivalent to two-player zero-sum games with perfect recall. We study two-team zero-sum games which are equivalent to two-player zero-sum games with imperfect recall. In this setting, Team-PSRO converges to TMECor, instead of Nash as in two-player zero-sum games. We will add more discussion of this in the paper, although this is fairly well-known background information in the literature.
Concerns about joint policies in Team-PSRO: We acknowledge the computational challenges related to the mix-and-match strategy, and we will expand on the methods used in our experiments to manage these complexities. Your suggestion for a more strategic approach is well-received, and we have included a new variant called PSRO-MM Top-k which outperforms all other methods. However, for team games with two players per team, which capture many domains of interest such as bridge, the number of joint policies only scales quadratically. As shown in our paper, Team-PSRO-MM seems to perform well in practice and is a valuable contribution as-is.
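The combinatorics discussed here ($k^n$ joint policies in general, quadratic for two-player teams) can be illustrated with a short stdlib sketch; the policy labels are hypothetical placeholders, not the authors' implementation.

```python
from itertools import product

def enumerate_joint_policies(population_per_player):
    """All mix-and-match joint policies for one team: the Cartesian product
    of each player's individual policies from past PSRO iterations."""
    return list(product(*population_per_player))

# Two players per team (e.g., bridge) with k = 5 policies each: quadratic growth.
pops = [["player%d_iter%d" % (i, t) for t in range(5)] for i in range(2)]
joint = enumerate_joint_policies(pops)
assert len(joint) == 5 ** 2  # k**n joint policies in general
```

With 11 players per team, as in GRF, the same enumeration would yield $k^{11}$ joint policies, which is why the Top-K variant samples a small subset instead.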
Descriptions of 4-player Kuhn poker and Liar's dice: Your point is valid; we will provide detailed information on the modifications made to these games to adapt them to two-team formats in the revised version of the paper. These games are standard in the literature, for example in [FCGS18].
Experiment details: Each policy was trained for 1.5×10^9 iterations. Since a random policy does not know how to pass or shoot, which makes it hard to train (especially in imperfect-information scenarios), we first used RL to train a random policy against the built-in AI at easy difficulty, stopping training when the win rate reached 40%. The aim was to enable the agent to learn basic behaviors (especially shooting). We then used self-play and Team-PSRO to train the pretrained model. Specifics about the number of iterations and the starting policies for Team-PSRO and Team-PSRO-MM will be included to enhance clarity.
Comparisons with other models: As noted above, NFSP and CFR are not valid algorithms for our setting, but fictitious team-play [FCGS18] is. We include additional experiments benchmarking against fictitious team play and show that our methods outperform fictitious team play.
[FCGS18] Gabriele Farina, Andrea Celli, Nicola Gatti, and Tuomas Sandholm. "Ex ante coordination and collusion in zero-sum multi-player extensive-form games." In: Advances in Neural Information Processing Systems (NeurIPS), 2018.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and clarification. Some of my concerns have been addressed and here is a follow-up discussion.
* Mix-and-match complexity: I agree with the authors that this is not a problem in games like Bridge, but it is a practical issue in GRF with 11 agents in each team. I'm glad to see the new variant called Team PSRO-MM Top-K and it is an empirical way to solve the problem.
* Description and Baselines for 4-player Kuhn poker and Liar's dice: the description of the modified team game would help the reader understand what method is applicable to these games, and I'm glad that the authors have taken my point. My previous concern is that these experiments only give results of the proposed algorithm and lack existing methods for comparison. The new result of fictitious team play serves as a good baseline.
* Baselines for deep RL experiments: I believe methods like NFSP are applicable in GRF and the needed change is to use cooperative RL instead of single-agent RL. However, as the authors have included new baselines like PFSP and fictitious team play, I think not adding my suggested baselines is acceptable.
I appreciate the authors' effort to solve my questions above, but my main concern is still the novelty of the team DO/PSRO algorithm. My reasons are below.
1. Minimal changes compared to DO/PSRO: if my understanding is correct, the only difference between team PSRO and PSRO is to use cooperative RL instead of single-agent RL for two-team zero-sum games (or to get joint BR instead of BR in team DO). This is a naive extension of PSRO and there is no new problem in doing so. In addition, the same algorithm has been used as a baseline in existing work like PSRO w. BD and RD, which makes me think the team DO/PSRO algorithm is of limited novelty.
2. The only theory result (Proposition 1) is a direct corollary of a well-known theorem. In Theorem 1 of the double oracle paper [1], it is proved that DO converges to an NE. Because team DO simply replaces best responses with joint best responses, and the solution concept is TMECor instead of NE, the proof of Theorem 1 in [1] can be directly used for Proposition 1 and the proof in Appendix A does follow the same argument. This makes the only theory result of this paper not informative.
Based on these two reasons, I think the first contribution described in L68, "We show that a straightforward extension of PSRO to team games converges to TMECor", is marginal. I would like to raise my rating to 4 based on the current discussions. And I'm willing to further raise the rating if my concern about the novelty is fully addressed.
[1] McMahan, H. Brendan, Geoffrey J. Gordon, and Avrim Blum. "Planning in the presence of cost functions controlled by an adversary." Proceedings of the 20th International Conference on Machine Learning (ICML-03). 2003.
---
Reply to Comment 1.1.1:
Title: Author Response
Comment: Thank you once again for your thoughtful comments and continued engagement with our work. We appreciate your acknowledgment of our efforts to address your concerns, and we would like to further clarify the points you raised regarding the novelty of our approach.
**Mix-and-Match Complexity**: We are pleased that you find our new variant, Team PSRO-MM Top-K, a satisfactory solution to the complexity issue in games like GRF.
**Description and Baselines for 4-player Kuhn Poker and Liar's Dice**: We are glad that you agree with our inclusion of fictitious team play as a baseline and the detailed description of the modified team games. We believe these additions will provide readers with a clearer understanding of our methodology and its comparative performance.
**Baselines for Deep RL Experiments**: Thank you for recognizing our efforts to include relevant baselines like PFSP and fictitious team play. We believe these comparisons provide a robust evaluation of our approach.
**Concerns About Novelty**:
**As we mentioned earlier, Xu et al. was published after the NeurIPS deadline, so it should not be used to argue against the novelty of our approach.** We acknowledge in the paper that Team-PSRO is not a radically new algorithm, but it forms the foundation of our other algorithms. Specifically, we claim our first contribution is showing that “*a straightforward extension of PSRO* to team games converges to TMECor.” The fact that we are the first to show this convergence is itself a meaningful contribution, regardless of whether it seems obvious in hindsight. **Most importantly, we introduce two additional algorithms: Team PSRO-MM and Team PSRO-MM Top-K.** Both of these algorithms represent substantial extensions and innovations compared to the two-player zero-sum PSRO algorithm. They outperform Team-PSRO and achieve state-of-the-art performance on the domains we test.
We acknowledge that Proposition 1 follows a similar argument to Theorem 1 in [1]. However, the adaptation of this theorem to the context of TMECor in two-team zero-sum games is a meaningful contribution. While the proof may follow a similar structure, the application to a new domain and the demonstration of convergence to TMECor are valuable insights that extend the existing understanding of these algorithms.
**Conclusion**
In conclusion, we believe that the challenges of two-team zero-sum games, the innovative solutions we have developed, and the empirical success of our approach collectively contribute to the originality and significance of our work. We hope that this response further clarifies our position and addresses your concerns. We are committed to making any additional revisions necessary to ensure that our contributions are fully understood and appreciated. | Rebuttal 1:
Rebuttal: Details about these experiments have been included in individual responses. Full details will also be included in the camera-ready version.
Pdf: /pdf/f95db58d5071c8221164d5685710a1cf2c60d7bc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Adversarial Learning for Feature Shift Detection and Correction | Accept (poster) | Summary: This paper introduces an invariance-based approach to out-of-distribution generalization where, for a given pair of distributions or domains $p$ and $q$, a subset of the input dimensions is filtered away so as to minimize some divergence between the filtered data, and then classifiers trained on samples from $p$ generalize to samples from $q$. The proposal leverages feature selection approaches to find a subset of the input space dimensions that minimizes a divergence between modified $p$ and $q$, as estimated by a domain discriminator. In addition, an adversarial approach is further introduced where features deemed responsible for the shift are corrected.
Strengths: - A very relevant problem is tackled with potential to practical relevance.
- The manuscript is very clearly written and easy to follow.
- The proposal applies well-established approaches to feature selection and estimation of distribution shifts.
Weaknesses: - Lack of contextualization with the out-of-distribution generalization literature. Is the setting under consideration discussed in the OOD generalization literature? If so, it should be highlighted. It seems the type of shift under consideration is the standard covariate shift case, where the data marginal distributions shift (in a particular way, since only a subset of the input dimensions shifts), and the divergence used for feature selection is the standard H-divergence estimator (prediction accuracy of a domain discriminator) [1].
- Potentially hidden assumptions: extra assumptions over data conditional label distributions are required since resulting pruned features are domain invariant. In other words, $p(y|x \sim p) = p(y|x \sim q)$, which is not stated. Refer to [1,2] for discussion on settings and corresponding assumptions for OOD generalization cases.
- Empirical assessment limited to tabular data, and limited to a particular pair of distributions. That is, the resulting set of features is only invariant across the pair of domains used for selection, and little is known about what would happen had a new shifted distribution been observed.
- The proposal and the evaluation ignore representation learning techniques, which are an alternative to feature selection. Invariance-based approaches to out-of-distribution robustness typically project data onto a space where domains cannot be discriminated, with domain-adversarial approaches being a popular example of such an approach [3, 4].
- The motivation to correction approach is unclear to me. Are the resulting data any useful/meaningful after the correction approach is performed?
[1] Ben-David, Shai, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. "A theory of learning from different domains." Machine learning 79 (2010): 151-175.
[2] Zhao, Han, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. "On learning invariant representations for domain adaptation." In International conference on machine learning, pp. 7523-7532. PMLR, 2019.
[3] Ajakan, Hana, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. "Domain-adversarial neural networks." arXiv preprint arXiv:1412.4446 (2014).
[4] Albuquerque, Isabela, João Monteiro, Mohammad Darvishi, Tiago H. Falk, and Ioannis Mitliagkas. "Generalizing to unseen domains via distribution matching." arXiv preprint arXiv:1911.00804 (2019).
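For context on the H-divergence estimator referenced in the weaknesses above, in practice it is commonly computed as the "proxy A-distance" derived from a domain discriminator's held-out accuracy. A one-line sketch (the function name is illustrative):

```python
def proxy_a_distance(domain_clf_accuracy):
    """Proxy A-distance from a domain discriminator's held-out accuracy:
    d_A = 2 * (2 * acc - 1). Equivalently 2 * (1 - 2 * err), the standard
    empirical estimator of the H-divergence between two domains."""
    return 2.0 * (2.0 * domain_clf_accuracy - 1.0)

assert proxy_a_distance(0.5) == 0.0  # chance-level accuracy: domains indistinguishable
assert proxy_a_distance(1.0) == 2.0  # perfect accuracy: maximally separable domains
```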
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - How does the proposal assume labels shift across distributions/domains?
- How does feature selection compare with invariant representation learning?
- Does inducing invariance across a pair of domains $p$ and $q$ influence invariance across other domains? Under which conditions?
- How would the feature selection approach fare on more complex structured data, such as images or text? One could first encode the data with a pre-trained model and apply the proposal on top of the resulting features.
- Could the authors clarify a bit the motivation on the correction approach? One would modify their data so it's "less different" than what is observed from a different data source, but then are the resulting corrected data more useful than the invariant subset of the features?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Cf. weaknesses section for details. Main limitations are lack of contextualization with previous work on OOD generalization, and a limited evaluation. It's also unclear what would be the motivation for a correction approach such as the one proposed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dear reviewer,**
**Thank you for your constructive suggestions. We believe there has been a misunderstanding regarding the goals and scope of the paper. We introduce two systems: the first localizes faulty features, which can be used to detect incorrectly standardized or processed values or to localize malfunctioning sensors in multi-sensor environments. The second system proposes new values for the corrupted data by performing divergence-reducing “imputation” of the data. All classifiers described in the paper are discriminators trained to classify between real and corrupted samples as part of the iterative processes of localization and correction.**
**Thus, the step in your summary line “... and then classifiers trained on samples from p generalize to samples from q” is not performed at any point in the paper. While the proposed method could be used as a way to provide more homogenized datasets to train ML systems, training classifiers with out-of-distribution generalization is out of the scope of this paper and not directly comparable to our approaches. In fact, many of the datasets used do not have semantic labels, so training a classifier/regressor with them would not be possible in any case. Feature shift localization (and correction) is useful in many applications and settings and is a problem independent of OOD generalization. The paper [NeurIPS 2020] provides a similar motivation to ours, and we use it as our benchmark baseline.**
*“... Is the setting under consideration discussed in the OOD generalization literature? If so, it should be highlighted ...”*
**There has been a misunderstanding with respect to the problem that we are solving; OOD generalization is not directly comparable to our work. Our setting provides a mechanism to “detect” data errors and “fix” them, in an application-independent manner.**
**As feature shift localization (and correction) is related to multiple fields of statistics and ML, we are including an extended related work section in the appendix, where we are mentioning OOD generalization, outlier detection, and GANs among others.**
*“... extra assumptions over data conditional label distributions are required since resulting pruned features are domain invariant. In other words, p(y|x∼p)=p(y|x∼q), which is not stated.”*
**Our setting does not include any classification tasks, and there is no training of supervised ML methods for prediction. The “y” (label) is not part of our framework, in fact, most of the datasets do not have semantic labels.**
*“Empirical assessment limited to tabular data, and limited to a particular pair of distributions.”*
**Our work focused on tabular data as it is where the problem of feature shift localization most commonly arises. For example, this system is being used to homogenize tabular datasets for medical statistical analysis. We adopt a similar framework as in [NeurIPS 2020] where only two distributions are considered. The proposed method can be applied to more distributions/data sources by applying the system iteratively.**
*“The proposal and the evaluation ignore representation learning techniques... Invariance-based approaches to out-of-distribution robustness typically project data onto a space where domains cannot be discriminated ...”*
**We do not consider out-of-distribution robustness / representation learning as they are not directly comparable. Those techniques learn a new space that provides better generalization across domains for classification. Here we try to localize and fix erroneous features within datasets.**
*“Are the resulting data any useful/meaningful after the correction approach is performed?”*
**The correction approach can be seen as a “divergence-removing” imputation. The usefulness and meaningfulness of the corrected dataset will vary largely depending on the application and type of data, in a similar way to data imputation.**
*“How does the proposal assume labels shift across distributions/domains?”*
**We don’t solve a classification problem and there are no labels.**
*“How does features selection compare with invariant representation learning?”*
**While our system can generate datasets that could provide more invariance for downstream classification, our approach differs from invariant representation learning, which learns a representation of the input so that it is invariant between domains. Here we try to localize and correct a dataset with corrupted features.**
*“Does inducing invariance across a pair of domains p and q influence invariance across other domains? Under which conditions?”*
**We are trying to detect faulty features and provide corrected datasets; we are not learning an invariant representation.**
*“How would the feature selection approach fare in more complex structured data, such as images or text for instance?”*
**Applying feature selection methods directly to natural images could lead to poor results. However, the problem we are trying to solve arises most commonly in tabular data.**
*“Could the authors clarify a bit the motivation on the correction approach? One would modify their data so it's "less different" than what is observed from a different data source, but then are the resulting corrected data more useful than the invariant subset of the features?”*
**Similar to data imputation, the usefulness of the corrected features will depend on the nature of the data and the application at hand. For example, in order to compute statistics and correlations between features, using a corrected dataset can help to reduce some bias that might appear if the corrupted dataset is used.**
**We hope that we have been able to clarify the goal and scope of the paper and you will consider increasing the score.**
**[NeurIPS 2020] Kulinski, Sean, Saurabh Bagchi, and David I. Inouye. "Feature shift detection: Localizing which features have shifted via conditional distribution tests."**
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for the clarification. Indeed, there has been a misunderstanding on my end, likely due to the similarity between the proposal and invariant representation learning approaches. In particular, the reference/query classifier estimates what that robustness literature often refers to as the H-divergence, and learning representations minimizing that divergence is a common approach to enable training predictors on one data source and testing on another. I clarify that dropping shift-causing features is what I refer to as invariant representation learning, but I realize now that that is not in scope, and only localization is tackled (plus fixing the shifting dimensions). I would still recommend that the authors contextualize their work with respect to that literature, given the similarity. I now have concerns regarding the scope being too narrow, though considering the clarifications and the other reviews, I will increase my score.
---
Reply to Comment 1.1.1:
Comment: **Thank you for reading our rebuttal, providing such a fast response, and increasing the score!**
**We completely agree that feature shift localization/correction has some similarities with invariant representation learning, and we are including a detailed extended related work section in the appendix describing them, as well as providing some related literature that was left out due to space limitations in the main text. We hope that this can inspire future work looking deeper into the relationship of feature selection, shift localization, and invariant representation learning. Regarding the scope of the paper, we adopted a similar scope and framework as in [NeurIPS 2020], which we believed was a good approach for this work.**
**[NeurIPS 2020] Kulinski, Sean, Saurabh Bagchi, and David I. Inouye. "Feature shift detection: Localizing which features have shifted via conditional distribution tests."** | Summary: The authors tackle the problems of feature shift localization and correction, i.e., identifying columns leading to divergence between two distributions, and imputing new values in their place, leading to lower divergence. For the first task they employ a random forest classifier trained to predict between samples coming from the reference or query distributions. The corrupted features are identified based on the feature importance scores of the random forest models.
For the correction task, a set of proposed corrections is generated per-sample, and the correction yielding the lowest probability under a classifier trained to distinguish between the distributions is selected as the imputation.
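The two stages described in this summary can be sketched in stdlib Python, with a trivial univariate threshold discriminator standing in for the random forest importances and a hand-written score standing in for the trained classifier's probability. All function names, data, and the scoring rule are illustrative assumptions, not the paper's implementation.

```python
import random

def localize_shifted_features(ref, qry):
    """Score each column by how well a trivial univariate threshold
    discriminator separates reference from query rows. A stdlib stand-in
    for random-forest feature importances: shifted columns are the ones
    a reference-vs-query classifier can exploit."""
    scores = []
    for j in range(len(ref[0])):
        r = [x[j] for x in ref]
        q = [x[j] for x in qry]
        thr = (sum(r) / len(r) + sum(q) / len(q)) / 2.0
        acc = (sum(v <= thr for v in r) + sum(v > thr for v in q)) / float(len(r) + len(q))
        scores.append(max(acc, 1.0 - acc))  # orientation-free separability
    return scores

def correct_feature(row, j, proposals, disc_score):
    """Replace row[j] with the proposed value the discriminator finds least
    suspicious (lowest 'corrupted' score) -- the per-sample selection step
    described in the summary."""
    best = min(proposals, key=lambda v: disc_score(row[:j] + [v] + row[j + 1:]))
    fixed = list(row)
    fixed[j] = best
    return fixed

random.seed(0)
ref = [[random.gauss(0, 1) for _ in range(3)] for _ in range(500)]
qry = [[random.gauss(0, 1) for _ in range(3)] for _ in range(500)]
for row in qry:
    row[1] += 5.0  # corrupt column 1 with a mean shift

scores = localize_shifted_features(ref, qry)
shifted = max(range(3), key=lambda i: scores[i])  # localizes column 1

# Crude proxy for the discriminator's probability: distance of column 1 from
# the reference mean (zero here). Pick the least suspicious proposal.
disc = lambda r: abs(r[1])
fixed = correct_feature(qry[0], shifted, [5.0, 0.1, 2.0], disc)
```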
Strengths: 1. The proposed method is outperforming all the baselines
2. the underlying models are relatively simple and computationally efficient
3. The authors provided the full code in the supplementary
Weaknesses: 1. Most importantly, how were the train/test splits performed? The text implies performance is measured directly on the training sets. Analyzing the code for the feature correction task, it seems that the models were indeed evaluated directly on the training data.
2. Several experimental details are missing. Why was the optimal hyperparameter configuration selected only for a subset of the baselines in the feature localization tasks? How were hyperparameters selected for the baselines for the feature correction tasks?
3. The paper could be written in a clearer way at times. In particular, the descriptions of both methods (Sections 4 and 5) are somewhat convoluted, especially the last paragraph of Section 5. What are the **k** classifiers and the CatBoost classifiers trained to predict? How are the values of B generated? How is the objective of equation 9 optimized? The method could have been described more clearly in the main text, instead of describing, e.g., the total variation distance for the case not applicable in practice.
4. Error bars of the results are not present, contrary to what is specified by the authors
Please note that my rating of the paper is mostly influenced by points 1 and 2, as they seem to invalidate the empirical results presented by authors.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: The main weakness of the paper are the missing experimental details, i.e., points 1. and 2. of Weaknesses.
For 1., it could very well be that I misunderstood the code and/or the text. If so, please indicate the fragments of the attached codebase where the train/test splits are indeed performed. In my understanding, judging for example by lines 461-470 in correct/src/scripts/run_benchmarks.py, the same dataset is used for model fitting as well as evaluation.
For 2., the authors should clarify how they selected hyperparameters for all baseline methods.
Other questions:
1. What is the motivation behind defining the threshold as $\kappa D$ (lines 213-214 in text)? Is it not possible for a single feature to cause high divergence?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We want to thank the reviewer for the insightful comments. We want to address each of the questions and concerns:**
*“Most importantly, how were train/test splits performed? The text implies performance is measured directly on the train tests. Analyzing the code for the feature correction task it seems that the models were indeed evaluated directly on the training data.”*
**We believe this is a perfectly reasonable misunderstanding. In the task of feature shift correction, there is no need for a train/test split. The evaluation process is similar to data imputation evaluation: algorithms take as input a matrix (concatenated reference+query) with some missing (or corrupted) values, and the algorithm outputs the same matrix with the missing/corrupted values replaced. The evaluation is done by taking the imputed/corrected matrix (the output matrix) and comparing it with the matrix before any corruption/missingness was present (the ground truth matrix). As long as the algorithm does not see the “ground truth matrix” during the prediction of the “output matrix”, and only sees the “corrupted input matrix”, the evaluation setting is perfectly valid, and it is the common setting adopted in missing data imputation tasks. Both shift detection and correction tasks follow a similar setting, where a training/testing split is not required. As our algorithms train classifiers internally, as part of the filtering and correction process, we acknowledge that there can be confusion regarding the absence of the train/test split used in regular supervised classification/regression tasks; therefore, we will add some clarifying text in both the main text and appendix to avoid causing any confusion to the readers.**
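This imputation-style evaluation can be sketched as follows (a minimal illustration; the function and variable names are ours, not taken from the paper's codebase, and the mean-shift corruption is just one example):

```python
import numpy as np

def evaluate_correction(correct_fn, reference, query_clean, corrupted_cols, rng):
    """Imputation-style evaluation: no train/test split is needed, because the
    algorithm never sees the ground-truth values it has to reconstruct."""
    # Build the corrupted input by shifting the selected columns (e.g., a mean shift).
    query_corrupted = query_clean.copy()
    query_corrupted[:, corrupted_cols] += rng.normal(
        1.0, 0.5, size=(query_clean.shape[0], len(corrupted_cols)))

    # The method only ever sees the reference and the corrupted query.
    query_corrected = correct_fn(reference, query_corrupted, corrupted_cols)

    # Score the output against the held-back ground-truth matrix.
    return float(np.mean((query_corrected[:, corrupted_cols]
                          - query_clean[:, corrupted_cols]) ** 2))
```

Any correction method that maps (reference, corrupted query) to a corrected query can be scored this way, exactly as imputation methods are scored against the pre-masking values.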
*“Several experimental details are missing. Why was the optimal hyperparameter configuration selected only for a subset of the baselines in the feature localization tasks? How were hyperparameters selected for the baselines for the feature correction tasks?”*
**We selected the optimal hyperparameters by doing a grid search for all methods in both the localization and correction tasks. We acknowledge that this was not properly explained in the paper, so we are including this information in the main text, and all the details of the search in the appendix. Note that in the localization task, some methods are marked with “*”, indicating that they use the number of corrupted features as extra input, making the comparison with the other methods unfair.**
*“The paper could be written in a clearer way at times. Especially the description of both methods (Sections 4 and 5) are somewhat convoluted, in particular the last paragraph of Section 5. The method could have been described clearer in the main text, instead of describing e.g., the total variance distance for the case not applicable to practice.”*
**Thanks for the comment! We agree that a lot of the theoretical part could be moved in the appendix and more details could be included in the main text. We are restructuring the paper to include a more detailed description of the methods in the main text.**
*“what are the k classifiers and the CatBoost classifiers trained to predict?”*
**All the classifiers trained in localization and correction tasks are trained to classify the samples as “corrupted” vs “non-corrupted” (i.e. reference vs query).**
*“How are the values of B generated?”*
**B is a set that consists of the feature values within the reference, the feature values after performing imputation with linear regression and nearest neighbor, and a shuffle of feature values from the reference panel. We are adding an additional description of its construction in the appendix.**
*“How is the objective of equation 9 optimized?”*
**Equation 9 is a simple combinatorial problem. We use the discriminator to evaluate every sample with each proposal b in B, and simply select the b that provides the highest probability of being non-corrupted.**
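This selection step can be sketched as follows (an illustrative snippet, not the paper's implementation; the discriminator is assumed to expose a scikit-learn-style `predict_proba`, with class 0 being the non-corrupted/reference class):

```python
import numpy as np

def select_proposals(discriminator, query, corrupted_cols, proposals):
    """For every sample, try each candidate b in B and keep the one the
    discriminator scores as most likely to be non-corrupted (class 0)."""
    corrected = query.copy()
    for i in range(query.shape[0]):
        best_prob, best_b = -1.0, None
        for b in proposals:  # each b holds candidate values for the corrupted columns
            candidate = corrected[i].copy()
            candidate[corrupted_cols] = b
            # probability of the "reference" (non-corrupted) class
            prob = discriminator.predict_proba(candidate[None, :])[0, 0]
            if prob > best_prob:
                best_prob, best_b = prob, b
        corrected[i, corrupted_cols] = best_b
    return corrected
```

Because B is a finite set, the argmax is exact: no gradient-based optimization is involved.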
*“Error bars of the results are not present, contrary to what has been specified by authors”*
**Thank you for pointing this out; we did not provide a plot with error bars. While performing the whole evaluation benchmark for both localization and correction multiple times is infeasible and expensive (it would require around 2 months of computing resources), we are including a plot evaluating both localization and correction with multiple random seeds on a given dataset, showcasing the variability between runs for each method. The plot is added in the appendix and provided in the attached pdf (attached figures 3 and 4).**
*“What is the motivation behind definining the threshold as kD (lines 213-214 in text)? Is it not possible for a single feature to cause high divergence? “*
**After normalizing and sorting the feature importances, we select the features with the highest importance until their cumulative sum surpasses tD (the κD of lines 213-214), where 0 < t < 1 is a fixed hyperparameter and D is the empirical divergence. You can think of D as acting like a “learning rate scheduler”: as the filtering process evolves and the estimated divergence shrinks, it forces fewer features to be filtered at each step, to avoid incorrectly removing unshifted features, while t acts as a scaling factor. It is entirely possible for a single feature to cause high divergence. In that case, the normalized, sorted importances will likely look like a 1 followed by 0s (note that the normalized, sorted importances sum to 1); because tD < 1, the first feature will be selected but not the following ones, leading to a correct detection.**
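A minimal sketch of this thresholding rule (names and the default value of t are illustrative, not from the paper):

```python
import numpy as np

def features_to_filter(importances, divergence, t=0.5):
    """Pick top features until their cumulative normalized importance
    first exceeds t * D, the scaled empirical divergence."""
    imp = np.asarray(importances, dtype=float)
    imp = imp / imp.sum()              # normalized scores sum to 1
    order = np.argsort(imp)[::-1]      # sort descending by importance
    cumulative = np.cumsum(imp[order])
    # index of the first position whose cumulative sum reaches the threshold
    k = int(np.searchsorted(cumulative, t * divergence)) + 1
    return order[:k].tolist()
```

In the single-dominant-feature case described above, the first cumulative value is already close to 1, so only that feature is returned.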
**We hope that we have addressed all your doubts and concerns, that we have provided enough arguments showing the validity of the adopted evaluation setting, and that you will consider increasing the score.**
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications.
My two main concerns were regarding:
1. The lack of train/test splits.
2. Experimental details.
As for 1., as I am not deeply familiar with the literature on data imputation, I will trust the authors that it is a common (and valid) evaluation practice.
As for 2., you mention you performed a grid search over all hyperparameter settings. Based on which metric did you select the optimal settings?
---
Reply to Comment 1.1.1:
Comment: **Thank you for reading our reply and providing such a fast response!**
*Based on which metric did you select the optimal settings?*
**We used the F-1 score to select the best hyperparameters for the feature shift localization task. For the feature shift correction task, we used the HP-Divergence (D_hp). We use this one instead of W2 and Symmetric KL Div for two reasons: (1) HP-divergence has a bounded range (goes from 0 to 1). On the other hand, W2 and Symmetric KL values can be very large for some methods and datasets, and lead to biased results (averaged across datasets) for some baselines. (2) the HP-Divergence can be used to provide lower and upper bounds of the total variation divergence (which is the one used throughout the theoretical setting). We have included all these details in the main text, as well as a better justification for the selected evaluation metrics in the appendix.**
**We hope that this addresses all your concerns, and you will consider upgrading the score. Please let us know if you have any further questions or suggestions.** | Summary: This paper proposes a method called Datafix that tries to identify and correct feature shifts in datasets.
This method is composed of two distinct algorithms:
DataFix-Locate, which first identifies which features have shifted between the datasets, and DataFix-Correct, which tries to correct these feature shifts.
DataFix-Locate learns a random forest discriminator trained to classify reference data from query data. Here the reference data is the non-shifted data, while the query data is the data to be tested and corrected for feature shifts.
This discriminator classifier is used to obtain an approximation to the likelihood ratio between the reference and query data distributions. This approximation is then used to estimate Total Variation (TV) distance between the reference and query data.
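The classifier-to-divergence link described here can be illustrated with a small sketch (a hedged illustration, not the paper's implementation: with equal-sized samples, a Bayes-optimal reference-vs-query classifier attains accuracy (1 + TV)/2, so 2 * accuracy - 1 serves as a plug-in TV estimate):

```python
import numpy as np

def tv_estimate(clf, reference, query):
    """Plug-in TV estimate from a two-sample classifier: with balanced
    classes, accuracy = (1 + TV) / 2 for the Bayes-optimal classifier.
    (In practice, out-of-bag or held-out predictions reduce overfit bias.)"""
    X = np.vstack([reference, query])
    y = np.concatenate([np.zeros(len(reference)), np.ones(len(query))])
    clf.fit(X, y)
    acc = float(np.mean(clf.predict(X) == y))
    return max(0.0, 2.0 * acc - 1.0)
```

A perfectly separable pair of datasets yields an estimate of 1, while identical datasets yield chance accuracy and an estimate of 0.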
The mean decrease of impurity for the random forest classifier is used to estimate importance scores for all features within the data. The top $K$ important features (the top $K$ features whose cumulative importance scores are greater than the scaled TV-distance between the reference and the query datasets) are removed, and the entire procedure is iterated for a set number of times.
All the features removed throughout this iterative process are the "located" features that have shifted between the reference and the query datasets.
DataFix-Correct then tries to fix these corrupted or discarded features by first imputing them through different methods (linear models, k-NN, random replacement from the reference data) and computing the TV distance between the imputed query data and the reference data. If this divergence is less than a set threshold, the imputed values are selected as the final corrected values. Otherwise, different "proposal" values for these corrupted features are constructed (consisting of different feature values from the reference data as well as previously imputed queries). The proposal value that gives the corresponding query the highest probability of being a non-corrupted sample under the discriminator classifier is chosen to replace the corrupted features. This process is repeated until the TV distance between the reference data and the corrected data is less than a threshold.
Extensive experiments are performed to validate this method. The proposed method outperforms baselines on both locating the manipulated features (measured through F1 score) and correcting these features (measured through divergence measures).
Strengths: - This paper addresses an important problem which is of great significance to the community.
Feature shift detection is an important problem, and this paper proposes a promising method to both identify these shifts and correct for them.
- The method is validated on numerous datasets under numerous feature shift settings (though most of these settings are enforced)
- The experimental setting and results also demonstrate that DataFix can even correct for correlated shifts in features
- The proposed method performs much better than existing baselines on correctly identifying and correcting feature shifts
Weaknesses: - I think the presentation of the work can be improved, particularly the layout of the paper. There is a lot of discussion in the main paper on material that is not essential to the paper. This particularly includes Section 4 (page 4, line 161), such as the discussion on f-divergences (granted, they are relevant, but it is not clear how relevant given that only TV is used throughout the paper), the discussion on mutual information (page 5, line 189 onwards), and equation (4).
This causes a lot of important material to be pushed to the appendix, particularly how the proposals are generated when correcting corrupted features, or how the algorithms for feature selection and feature correction work (as detailed in the appendix).
This, in my opinion, hurts the readability of the paper and makes the proposed method appear much more complicated than it is. It is very difficult to understand the paper without going through the details in the appendix. I would encourage the authors to improve readability by moving parts of Appendices D and E to the main paper, and perhaps moving some of the discussion on f-divergences and the relationship between feature selection and shift localization to the appendix.
I would be happy to revise my score if there are important reasons for or against this suggestion, and I look forward to your response.
- Some of the details, such as how the proposal values are constructed, can be a bit confusing in their current form, and I would request that the authors provide a small example in the paper (or the appendix) to help explain this.
- Metric for evaluating feature correctness. The authors propose using divergence measures to evaluate how close the corrected query values are to the reference values. This might be fair but from a practical standpoint, I think the purpose of correcting for feature shifts is to ensure the performance of a classifier (or some other ML model) is the same on the reference dataset as that on the corrected dataset. I think such classifier results which evaluate the difference between performance before and after correction would have been helpful.
The corrected features might have a small overall divergence, but I am not sure this guarantees that the class-conditioned divergence between the reference and the query data is small. I will be happy to raise my score if the authors could elaborate on this choice of evaluation metric for feature correctness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The experiments show that under different fractions of shifted features (0 to 0.20), DataFix performs much better than other methods. How would the performance scale under even more severe feature shifts? When do you think DataFix could break down?
- The paper mentions that GANs are not used to produce the corrected features as they are not suitable for tabular data. But some of the datasets used, such as MNIST (image data) and the simulated cosine and polynomial functions (functional data), are data modalities where NNs can do well. Do you think a GAN-based adversarial approach for feature correction could perhaps perform better in these scenarios?
- A lot of feature shift scenarios were implemented within the paper. Though, are there feature shift scenarios that are more realistic and perhaps encountered more often in real-world settings? Are there existing datasets where these feature shifts naturally occur (rather than generating feature shifts as done in the paper)?
- The appendix provides details on computational running times for DataFix. Are there details on the computing infrastructure used to run these experiments in the appendix? I think these details would help to better understand the reported runtimes.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors mention in the main paper that limitations are discussed in the appendix, but I wasn't able to find discussion on this.
I think any discussion on the limits of DataFix and when it could fail could be helpful for readers and potential users of DataFix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dear Reviewer,**
**Thank you for the highly constructive suggestions. We want to address each of the questions and concerns:**
*“the presentation of the work can be improved… lot of discussion in the main paper is not essential … moving parts of appendix D and E to the main paper and maybe moving some of the discussion on f-divergences and the relationship to the appendix. I would be happy to revise my score...”*
**We completely agree that many important details of the method are in the appendix and that non-essential theoretical information could be moved from the main text. Therefore, we are moving material from sections 4 and 5 to the appendix and adding a more detailed description of the methods (currently in the Appendix) into the main text. Furthermore, with the extra available page upon acceptance, we will try to place the algorithm boxes into the main text.**
*“how the proposal values are constructed, can be a bit confusing ... would request the authors to provide a small example in the paper”*
**We agree that it can be confusing to follow the details of the methods, therefore, we are including multiple figures (see attached pdf) in the appendix to provide a better intuition of the inner workings of localization (attached fig 1) and correction methods (attached fig 2). We are including two additional diagrams describing the methods, which do not fit in the attached rebuttal pdf.**
*“The authors propose using divergence measures to evaluate …, I think the purpose of correcting for feature shifts is to ensure the performance of a classifier is the same on the reference as on the corrected dataset … small overall divergence between, not guarantees that class conditioned divergence is small. I will be happy to raise my score if the authors could elaborate on this”*
**While localization/detection tasks have well-established evaluation metrics, evaluation procedures for the distribution shift removal problem are not as standardized. We selected the given evaluation metrics because (1) we wanted application-independent metrics, as the corrected datasets might be used in many scenarios (statistical analysis, visualization, machine learning). For example, the proposed algorithm is already being used to homogenize biomedical data in which statistics and correlations are estimated for scientific discovery. Furthermore, some of the datasets do not even have semantic labels, making an evaluation with downstream classification/regression tasks impossible. (2) As data imputation and shift correction are closely related tasks (each tries to replace missing/corrupted values with new ones), we wanted to adopt metrics from the imputation literature. Recent papers [a] have used W2 as an imputation evaluation metric, and we included estimates of the symmetric KL divergence and the HP-divergence as they are computationally tractable and appeared to be sensible estimates of the quality of the shift correction results. Note that we do not include MSE as it does not properly characterize divergence between distributions (see Appendix C). Finally, note that application-independent metrics are standard in the imputation literature.**
*“under different fraction of shifted features Data-Fix performs much better. How would the performance scale? When could break down?”*
**It depends a lot on the type of manipulation. For example, if the mean of just one feature is largely shifted, DataFix can detect it without problem, despite only one feature being affected. For more challenging manipulations, such as removing the correlation between features, a small number of affected features can make DataFix fail. The true underlying divergence between distributions (which is typically unknown) will partly indicate when DataFix will fail. Figures 18 and 19 in the appendix, which include an example of how the true and predicted divergences evolve through the filtering process, might help to build better intuition.**
*“GANs are not used. But some of the datasets used are data modalities where NNs can do well. Do you think a GAN based adversarial approach for feature correction could perform better?”*
**We do include a GAN-based method for feature correction: GAIN, which combines reconstruction and adversarial losses. However, the method did not provide competitive results and did not scale well to higher-dimensional datasets.**
*“A lot of feature shift scenarios were implemented. Are there feature shift scenarios which are encountered more often? Where do these feature shifts naturally occur?”*
**We are encountering many of the proposed shifts when using the system in a biomedical application. Features collapsing to 0 or being negated are not uncommon, especially within genomic sequences. Furthermore, bad standardization (e.g., incorrectly encoding features with the metric or imperial system) can lead to mean shifts. We are open to suggestions for other shifts so we can incorporate them in future works.**
*“Are there details for the computing infrastructure in the appendix?”*
**We forgot to add the hardware details in the appendix; thanks for noticing that! All experiments were run on the same compute resources, which included an Intel Xeon Gold with 12 CPU cores. We are adding all this information in the appendix.**
*“The authors mention in the main paper that limitations are discussed in the appendix, but I wasn't able to find this”*
**The description of the results includes some discussion of the limitations (e.g., which manipulation types tend to be detected incorrectly). However, we agree that a section in the appendix dedicated to discussing the limitations of the method would benefit the paper. We are including such a section.**
**We hope that by addressing your concerns you would consider raising the score. Thank you for helping us improve the quality of the manuscript!**
**[a] Jarrett, Daniel, et al. "HyperImpute: Generalized iterative imputation with automatic model selection." ICML, 2022.**
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply. I have increased my score. I would be willing to further increase my score if the authors could further elaborate on my 3rd response.
Please see my response below
1. Thank you for considering my suggestions on improving the readability of the paper. I think these changes could greatly help readers better understand this work (and it seems that some of these changes were also suggested by Reviewer 5LeU).
2. Also, these new figures in the attached pdf file, particularly MNIST example on proposed figure, are helpful in getting a better understanding of the paper.
3. Thank you for the clarification on why divergence measures were used to evaluate the proposed method's performance, particularly how a few datasets might not make it possible to evaluate classification accuracy scores.
I think a few sentences on this relation between the used evaluation metrics and imputation evaluation metrics in the paper could be helpful.
Though I think there are certainly a few datasets where labels are available (MNIST, UCI datasets). Also, it is certainly possible to generate labels for synthetic data. Additionally, there are other imputation methods, particularly for biomedical applications, that report classification/regression results; [A] is just one example (and you cite it in the related work section on feature shift correction).
I also think that the relationship between imputation and feature shift correction is a bit more subtle. Imputation can certainly be considered 'correction', but feature shifts can potentially lead to scenarios that are more involved than missing samples and imputing them. In the introduction (page 2, lines 12-13), the relationship between correction and imputation or alignment is made. Alignment is a much broader concept than citations [12,13] suggest (which also just use the divergence scores; both of these papers are also by the same author, so the citation for alignment covers a very narrow area of alignment).
I think feature correction is inherently a necessity for downstream tasks. I mean, that is why they are called 'features' in my opinion: features for some downstream task.
I realize that the rebuttal period is very brief to conduct experiments on a large scale.
Though I think future works on feature correction should report classification/downstream tasks.
This is something that community should consider. I would like to know your opinions on this.
4. Thank you for pointing to GAN baseline.
[A] Yoon, Jinsung, James Jordon, and Mihaela van der Schaar. "GAIN: Missing data imputation using generative adversarial nets." International Conference on Machine Learning. PMLR, 2018.
---
Reply to Comment 1.1.1:
Comment: **Thank you for your response and for upgrading the score! Your reviews have been very useful in improving the manuscript!**
After considering your arguments about downstream evaluation tasks and carefully looking at the GAIN paper, we agree that such evaluation would benefit the quality of the manuscript. Therefore, **we are including an evaluation of downstream classification and regression tasks as a new section in the appendix**.
Due to time and compute limitations we have only been able to perform the evaluation for two datasets (Musk2 and Energy), but we will expand it to the other datasets that include classification or regression labels. Instead of using linear models as done in the GAIN paper, we use LightGBM Regressor/Classifier for the downstream evaluation because (1) it reflects a more realistic setting in data analysis, and (2) it makes use of higher-order relationships between features, which linear models fail to capture.
**We train the downstream models on the query datasets and evaluate them on the reference dataset**. We perform the experiment using each version of the query dataset (the original pre-corruption dataset, and the corrected datasets produced by each correction method) and evaluate the classifier/regressor when all features are used (100% in the tables below), when only 50% of the non-corrupted features are included, and when 0% of the non-corrupted features are included (only the corrected features are used). We report balanced accuracy for classification and root mean square error for regression in the tables below.
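A model-agnostic sketch of this protocol (names are illustrative; any fit/predict estimator, such as the LightGBM models we used, can be plugged in, and the sketch reports plain accuracy rather than the balanced accuracy of the tables):

```python
import numpy as np

def downstream_score(model, query_X, query_y, ref_X, ref_y,
                     corrected_cols, keep_fraction, rng):
    """Train on the (corrected) query, evaluate on the reference, keeping only
    a fraction of the non-corrupted columns alongside the corrected ones."""
    clean_cols = [c for c in range(query_X.shape[1]) if c not in corrected_cols]
    n_keep = int(round(keep_fraction * len(clean_cols)))
    kept = list(rng.choice(clean_cols, size=n_keep, replace=False))
    cols = sorted(list(corrected_cols) + kept)
    model.fit(query_X[:, cols], query_y)
    pred = model.predict(ref_X[:, cols])
    return float(np.mean(pred == ref_y))  # plain accuracy for illustration
```

Setting `keep_fraction` to 0.0, 0.5, and 1.0 reproduces the three columns (0%, 50%, 100%) of the tables below.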
**Table 1: Downstream classification task with Musk2 dataset, including all corrected features with all non-corrupted features (100%), with half of the non-corrupted features (50%), and without non-corrupted features (0%).**
| method | Balanced Accuracy (0%) | Balanced Accuracy (50%) | Balanced Accuracy (100%) |
|----------------|------------------------|-------------------------|--------------------------|
| Original Query | 0.951 | 0.960 | 0.964 |
| DataFix | 0.902 | 0.953 | 0.957 |
| HyperImpute | 0.883 | 0.939 | 0.958 |
| LR | 0.871 | 0.932 | 0.938 |
| ICE | 0.848 | 0.928 | 0.938 |
| INB | 0.833 | 0.946 | 0.952 |
| KNN | 0.815 | 0.925 | 0.934 |
| Sinkhorn | 0.806 | 0.948 | 0.963 |
| MIRACLE | 0.786 | 0.931 | 0.953 |
| MLP | 0.786 | 0.905 | 0.934 |
| MissForest | 0.779 | 0.908 | 0.944 |
| GAIN | 0.717 | 0.887 | 0.911 |
| DD | 0.550 | 0.930 | 0.945 |
| SoftImpute | 0.508 | 0.855 | 0.855 |
| Mean | 0.500 | 0.952 | 0.961 |
**Table 2: Downstream regression with Energy dataset.**
| method | RMSE (0%) | RMSE (50%) | RMSE (100%) |
|----------------|----------|------------|-------------|
| DataFix | 0.089 | 0.082 | 0.080 |
| Original Query | 0.093 | 0.084 | 0.082 |
| KNN | 0.105 | 0.094 | 0.091 |
| Sinkhorn | 0.109 | 0.091 | 0.085 |
| SoftImpute | 0.114 | 0.145 | 0.167 |
| INB | 0.116 | 0.106 | 0.101 |
| Mean | 0.122 | 0.088 | 0.083 |
| DD | 0.128 | 0.122 | 0.119 |
| HyperImpute | 0.130 | 0.122 | 0.123 |
| GAIN | 0.130 | 0.170 | 0.206 |
| MissForest | 0.147 | 0.155 | 0.156 |
| LR | 0.191 | 0.189 | 0.205 |
| MLP | 0.212 | 0.225 | 0.233 |
| ICE | 0.217 | 0.225 | 0.234 |
| MIRACLE | 0.334 | 0.350 | 0.352 |
Methods are sorted by their performance without non-corrupted features (0%). The classification/regression downstream performance when using **DataFix surpasses the competing methods** in most settings, in some cases even surpassing the performance of directly using the original query dataset (ground truth pre-corruption).
**We hope that these additional results help to improve the quality of the manuscript and that you will consider further increasing the score. Thank you for all your excellent feedback!** | Summary: This paper studies the problem when a subset of coordinates in the features have shifts in distributions. It specifically proposes algorithms to detect the shifts, and methods to “repair” the shifted coordinates.
Imo, the most interesting part of the paper is that it relates certain inequalities in divergence measures with GAN via likelihood ratio test. Under this interpretation, the problem of identifying distributional shifts becomes tuning some classifiers. Their experiments also confirm that this approach is effective.
Then the second part, repairing the corrupted coordinates, smells a bit hackish; e.g., it proposes a few candidate solutions, then runs some classification algorithms to determine which candidates to use. Nevertheless, it appears to me that the central idea is to use unchanged features to “complete” the changed ones, which I believe is plausible. In general, this part feels like heavy lifting and I do not have much intuition about it.
Overall, I believe this is a quite solid contribution to neurips.
Strengths: it makes an interesting connection between distributional shift and GAN.
Weaknesses: Second part sounds a bit hackish. I also have a few questions (see below).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
I have a few major concerns/questions.
This work seems not very careful in its expository text. For example, statistically distinguishable does not imply computationally distinguishable (the foundation of modern/any cryptography?). So D(p, q) > \epsilon does not always mean that they can be detected.
Divergence measures are asymmetric. Can you comment on this, e.g., do we have two thresholds, or do we always put the original distribution on the left?
When we can “complete” the features with shifted distributions for training and/or inference purpose, does that mean these features are not important and can be excluded anyway?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dear Reviewer,**
**Thank you for the thoughtful review! We agree that some of the introduced heuristics can feel a “bit hackish”, however, they have proved to be more accurate than previous works and than our attempts with more conventional supervised learning methods. One way to get a better intuition for the correction system is to compare it with a k-NN imputer: a k-NN imputer fills the missing feature values with a combination of the top-k closest samples with respect to L2 distance on non-missing features.** **Our correction system replaces the corrupted feature values with the top-1** **“candidate” with respect to the probability estimated by the discriminator.** **In a sense, the correction task is a divergence-reducing imputation problem.** **We are including a more in-depth discussion, as well as additional figures (see attached pdf), to provide better intuition to the reader and to further motivate our approach.**
*“statistical distinguishable does not imply computational distinguishable”*
**Definitely, thank you for catching this error, we have re-phrased the text to avoid making this mistake.**
*“Divergence measures are asymmetric.”*
**While many f-Divergences are asymmetric (e.g. KL-Divergence), there are a few that are in fact symmetric, such as the “Hellinger distance” or the “Total Variation” used in this work.**
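The symmetry of these measures is easy to verify numerically on discrete distributions; a short pure-Python illustration (ours, for intuition only):

```python
import math

def total_variation(p, q):
    """TV(p, q) = 0.5 * sum_i |p_i - q_i|; symmetric by construction."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl_divergence(p, q):
    """KL(p || q) = sum_i p_i * log(p_i / q_i); asymmetric in general."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p, q = [0.7, 0.2, 0.1], [0.4, 0.4, 0.2]
assert total_variation(p, q) == total_variation(q, p)  # symmetric
assert kl_divergence(p, q) != kl_divergence(q, p)      # asymmetric
```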
*“When we can “complete” the features with shifted distributions for training and/or inference purpose, does that mean these features are not important and can be excluded anyway?”*
**Deciding to use corrected features or simply discarding them will depend largely on the application at hand. For example, this method (using corrections) is currently being deployed for biomedical applications (for instance genomics), where simple statistics and correlations are desired from the features for scientific discovery. Using corrected features allows one to use all of the data, while not distorting the statistical estimates as severely as using the corrupted data would.** | Rebuttal 1:
Rebuttal: **We would like to thank all reviewers for their constructive comments,**
**First, we are restructuring the text to improve the readability of the paper. As suggested by the reviewers, we are including more details about the proposed methods in the main text, and moving some theoretical results to the appendix. Additional figures (fig 1,2,5 in the attached document) are included in the appendix providing better intuition for the design details of the methods. Furthermore, we are including in both the main text and appendix more details regarding the evaluation setting, hyperparameter search, and computational resources, as well as an additional figure with error bars showing the variability between random seeds (see fig 3,4 in the attached pdf).**
**We want to clarify the motivation for the proposed work: while presented jointly, both the localization and correction work can be useful by themselves. The shift localization can be used to detect incorrectly formatted features, errors arising during data processing, or in multi-dimensional sensor applications, detecting malfunctioning sensors. The work presented in [Neurips 2020] provides a similar setting and set of motivations and acts as our benchmark baseline. The shift correction system can be used in any application where simple imputation would be used, providing a replacement for corrupted/missing values that minimize an empirical divergence. Deciding to use one or both systems is completely dependent on the application. We are modifying the text accordingly to properly depict the motivation of the work.**
**While the proposed methods can be used to extend training datasets for training other ML models (e.g., supervised classification or regression), they are not limited to such tasks, and in fact, many of the datasets used do not have labels, making an evaluation using downstream ML classifiers impossible. In fact, our system is currently being deployed in a biomedical setting in order to homogenize data that is later used to compute statistics and correlations between features for scientific discovery. Therefore, we adopt a more appropriate evaluation setting, independent of the application where DataFix is applied, as is commonly done in related tasks such as data imputation [ICML 2020]. Finally, we want to note that the evaluation setting adopted is correct, no train/test split is required (as in data imputation tasks), and we have a set of datasets (simulated data) for hyperparameter tuning and a set for purely testing (real datasets). We are modifying the text accordingly to properly justify the adopted evaluation setting.**
**The attached pdf with figures includes:**
- Figure 1: examples of feature importance during the iterative localization process with different parameters.
- Figure 2: number of features filtered at each iteration, with different hyperparameters.
- Figure 3: error bars for shift correction.
- Figure 4: error bars for shift localization.
- Figure 5: example of proposals in MNIST dataset for shift correction.
**Two extra diagrams providing more details of our proposed methods are included in the appendix.**
**We hope that we have addressed all of your concerns. As the method provides accurate results, it is useful in many applications (in fact, it is already being deployed), and your suggestions have helped us to largely improve the text and structure of the manuscript, we believe that our work is a strong submission to NeurIPS, and we hope that you will consider raising the score.**
**[NeurIPS 2020] Kulinski, Sean, Saurabh Bagchi, and David I. Inouye. "Feature shift detection: Localizing which features have shifted via conditional distribution tests."**
**[ICML 2020] Jarrett, Daniel, et al. "HyperImpute: Generalized iterative imputation with automatic model selection."**
Pdf: /pdf/725317c6a33bf48b7c82eb31909a6d9d2fe7b367.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Normalization Layers Are All That Sharpness-Aware Minimization Needs | Accept (poster) | Summary: The paper proposes an adversarial perturbation method linked to SAM for the affine normalization parameters, in contrast to perturbing the full set of parameters. The results show this approach improves upon standard SAM and prior sparse SAM approaches.
Strengths: The proposed SAM-ON method yields several benefits:
1. improves upon SAM-all and various other SAM variants
2. achieves this by perturbing only the normalization layers, which correspond to a small percentage of the total parameters
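For intuition, the two-step SAM update restricted to a parameter subset can be sketched on toy scalar parameters in pure Python (our illustration, not the paper's implementation; in SAM-ON the subset would contain only the affine normalization parameters):

```python
import math

def sam_step(params, grad_fn, lr=0.1, rho=0.05, perturb=None):
    """One SAM update on a dict of scalar parameters.

    Ascent: move the selected parameters by rho * g / ||g||.
    Descent: take an SGD step on ALL parameters using the gradient
    evaluated at the perturbed point.

    `perturb=None` recovers standard SAM (perturb everything); passing
    only the normalization-parameter names gives a SAM-ON-style update.
    """
    names = list(params)
    subset = names if perturb is None else [n for n in names if n in perturb]
    g = grad_fn(params)
    norm = math.sqrt(sum(g[n] ** 2 for n in subset)) or 1.0
    perturbed = dict(params)
    for n in subset:
        perturbed[n] += rho * g[n] / norm
    g_adv = grad_fn(perturbed)
    return {n: params[n] - lr * g_adv[n] for n in names}

# Toy loss L(w, gamma) = w^2 + gamma^2, so the gradient is 2 * value.
grad_fn = lambda p: {n: 2.0 * v for n, v in p.items()}
params = {"w": 1.0, "gamma": 2.0}
print(sam_step(params, grad_fn))                     # perturb all (SAM)
print(sam_step(params, grad_fn, perturb={"gamma"}))  # SAM-ON style
```

Note that in the restricted variant only the ascent step is sparse; the descent step still updates every parameter, which is why the approach remains a full training method rather than a partial one.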
Weaknesses: Weaknesses of this paper include:
1. gains in Table 1 are marginal
2. originality of approach is lacking given prior literature on SAM variants
3. justification of SAM-ON is poor; the experiments in 5.2 and Table 7 try to poke at this but fall short, as the authors do not provide a solid explanation of what the effect can be attributed to; they only show that SAM-ON increases sharpness in the L-inf sense. But what about other metrics of flatness? Is there a more universal metric that can explain the effectiveness of SAM-ON?
4. it is unclear whether SAM-ON is truly better than other SAM variants in out of distribution robustness, e.g. WILDS benchmark
5. results are only shown for CIFAR and ImageNet datasets; more datasets and benchmarks need to be considered
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: see weaknesses above
One of the questions this paper fails to answer is why sharpness is higher for SAM-ON, even though its formulation is motivated to minimize sharpness. This is not a complete enough story and opens up lots of questions that need to be addressed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Somewhat discussed but limited. Would like to see a more thorough limitations discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Following your suggestion, we are happy to extend the limitations section of the paper. In case there is specific work you think would need to be included, we would much appreciate a pointer. Below we address your other comments.
1. __“gains in Table 1 are marginal”__:
SAM-ON consistently and strongly outperforms base optimizers, and its gains over SAM are often >1% for ResNet architectures on CIFAR data. Gains of similar magnitude (or none at all) were also observed in other SAM-variant papers, such as ASAM and FisherSAM, so the amount of gain observed is within expectations. Further, we provide additional experiments on VGG-Nets and DenseNet for CIFAR data in Table 1 of the rebuttal pdf and observe that SAM-ON clearly outperforms SAM-all. Finally, our gains are even more impressive for ViTs (Table 3, main paper), which we believe to be very important as SAM has shown particular success in this domain (Chen et al., 2022).
2. __“originality of approach is lacking”__:
To illustrate the originality of our approach (as also corroborated by reviewer dJUQ), we briefly recap here the most closely related works and how they differ from our approach. The most related works are by: 1) Frankle et al., 2021, which specifically studies the BatchNorm parameters by only training BatchNorm, but in a very different setting: without SAM and all other parameters are frozen. 2) Mi et al., 2022 which propose a sparse SAM approach, but again the approach and aim is very different: they do not highlight the importance of specific parameter groups, whereas we stress the unique role played by the normalization layers allowing for an extremely sparse yet effective perturbation, and their aim is to reduce the computational cost, not to enhance generalization (we outperform them in both). 3) Much of the literature on SAM proposes new variants based on designing better perturbation models by taking reparametrization invariances (like ASAM) or loss geometry perspectives (like Fisher-SAM) into account. We think that the fact that our simple heuristic “perturb only the normalization layers” aids all of those methods is quite remarkable. We hope this addresses the reviewer’s concerns. If the reviewer is aware of further literature relating SAM to the normalization layers of a network we would much appreciate a specific pointer.
3. __“other metrics of flatness”__:
Andriushchenko et al., 2023 performed a large-scale study on the relation between a range of sharpness measures and generalization, finding little correlation. For CIFAR data, "_the best correlation [...] is achieved by_ $\ell_\infty$ _adaptive worst-case sharpness with logit normalization for a small_ $\rho$". Since this metric is additionally very close to the perturbation model of ASAM elementwise $\ell_\infty$ we chose to focus on this particular metric in our paper. Following your suggestion, we investigated more sharpness measures (Table 3 in rebuttal pdf) and find in alignment with our previous findings that SAM-ON is sharper than SAM-all with respect to most metrics (especially worst-case sharpness and the metrics that are optimized during SAM), although there exist some exceptions. Results for more models and $\rho$ values are similar (omitted due to space limitations) and will be included in the revised paper.
4. __“a universal metric that can explain the effectiveness of SAM-ON”__
There is a long history of trying to find metrics that correlate well with generalization performance. One of these is sharpness, which we focused on in this paper. Our work lends evidence to recent findings (Andriushchenko et al., 2023) that sharpness may not always correlate well with generalization performance. The search for alternative explanations for SAM’s success is an active area of research, for example recent work (that appeared on arXiv after the NeurIPS submission deadline and hence not included in our related work section) suggests that the use of SAM prunes a significant number of activations and leads to low-rank features (arXiv:2305.16292). Our work contributes to this search for understanding the fundamental reasons for the success of SAM by illustrating the important role played by the normalization layers.
5. __“OOD robustness”__:
In Table 2 in the rebuttal pdf we have provided experiments for training Vision Transformers from scratch on ImageNet using SAM-all/SAM-ON with base optimizers AdamW and Lion, and evaluated those on OOD test sets (ImageNet-Sketch, ImageNet-R). We find consistent improvements of SAM-ON over SAM-all on ImageNet, ImageNet-Sketch, and ImageNet-R.
6. __“more datasets and benchmarks”__:
For the selection of our datasets and benchmarks we followed the literature on SAM and its variants. Additionally, we include in the rebuttal pdf (Table 1 and 2) results for OOD robustness as suggested by the reviewer, ViTs trained from scratch on ImageNet as suggested by reviewer Ftd6, and more models on CIFAR as suggested by reviewer wExQ. For all of these we observe consistent gains of SAM-ON over SAM-all. We hope this addresses your concern, but do let us know if there is anything that is not adequately supported.
7. __“why sharpness is higher for SAM-ON, even though its formulation is motivated to minimize sharpness.”__
Opinions differ on whether our findings that SAM-ON may increase sharpness are trivially true or unexpected, e.g. as mentioned by Reviewer wExQ: “The observation that .. [SAM-ON] .. holds the higher sharpness than original SAM might be trivial. SAM are designed to reduce the sharpness, while a degraded version of SAM should hold a higher sharpness in principle.” This is exactly why we believe it is important to study sharpness in our work because it is not obvious how we would expect it to behave with respect to SAM-ON. Our findings lend support to the idea that sharpness may not be the sole reason for SAM’s success and aid the search for alternative explanations (Andriushchenko et al., 2023).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttals. While I understand the recent paper "Towards Understanding Sharpness-Aware Minimization" also notes that convergence to flat minima with SAM is incomplete, this paper does not move the needle forward much. There is a lack of theory and metrics related to the normalization layers that support the proposed method SAM-ON. In my view, there is still a lack of a solid reasoning about the contributions of this paper. I choose to keep my score for now.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response.
We are disappointed that the new experiments that you asked for (OOD robustness, other sharpness metrics, more benchmarks) and our explanations (table 1 gains, originality, universal metric, sharpness) are not acknowledged or discussed in your new response.
With regards to your new comment: we emphasize that our contribution is fundamentally different from the study of _"Towards Understanding Sharpness-Aware Minimization"_. The latter shows that the better generalization of SAM or ASAM is mostly not correlated with finding models with flat minima. Our work highlights the important role played by the normalization layers: we show that perturbing only the normalization layers (less than 0.1% of the total parameters) enhances or at least maintains performance for SAM and all other SAM variants we considered. This is complementary to the paper _"Towards Understanding Sharpness-Aware Minimization"_ by working towards an alternative explanation on why SAM and its variants work.
We would greatly appreciate it if you could elaborate on what you consider a _"lack of solid reasoning about the contributions of this paper"_ so we can engage in a discussion. | Summary: As Sharpness-Aware Minimization (SAM) aims to regularize the flatness of the loss landscape for better generalization, this paper shows that only perturbing the normalization layers is sufficient to achieve this. To prove this, the authors first propose a method called SAM-ON (SAM-OnlyNorm). Then, they conduct experiments on CIFAR datasets across different models and variants of SAM, comparing the test accuracy with and without applying SAM-ON. The experiments (in most cases) show that SAM-ON outperforms vanilla SAM, as well as SAM variants with ON outperforming SAM variants. Furthermore, since SAM-ON only needs to perturb a few parameters, it can save computational resources. Finally, this paper conducts numerous experiments to understand SAM-ON and concludes that sharpness may not be the key to SAM.
## post-rebuttal
I've updated my score since my concerns are adequately addressed.
Strengths: 1. This paper demonstrates a new understanding of the underlying mechanism of SAM, which is both novel and informative.
2. The proposed SAM-ON can improve efficiency and generalization in most cases.
3. The experiments are adequate to support the claims.
4. The code is provided, which guarantees reproducibility.
Weaknesses: 1. SAM-ON cannot outperform SAM (or its variants) in some cases, showing that SAM-ON may not always be effective and decreasing the soundness of the claims that support SAM-ON.
2. The improvement of SAM-ON on ImageNet seems to be consistently lower than on CIFAR datasets for different architectures. Therefore, this reviewer hypothesizes that SAM-ON only works for small datasets and is not very effective for large datasets.
3. As the authors acknowledge, this paper only investigates SAM and SAM-ON on vision data. Thus, the analysis cannot support generalizing the proposed understanding to other tasks, such as language tasks.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and helpful suggestions.
We address your concerns below.
1. __SAM-ON may not always be effective__
We considered a wide range of different models (see Table 1 in rebuttal pdf for even more models), data augmentations, and datasets. For almost all of these a SAM-ON variant obtained the best test accuracy. There are indeed some exceptions, but even in those settings we find it remarkable that by only applying SAM to the normalization layers, this approach can so strongly outperform base optimizers. We believe this is a valuable contribution to the literature in the search for an enhanced understanding on the mechanisms behind SAM’s success.
2. __The improvement of SAM-ON on ImageNet seems to be lower than on CIFAR datasets__
Additional results for ViTs trained from scratch on ImageNet can be found in Table 2 in the rebuttal pdf. We observe that SAM-ON outperforms SAM in this setting for both the AdamW and Lion optimizers, and both with respect to the ImageNet test set and OOD test sets. Further, we agree that the results for ImageNet and ResNet are less impressive than for ViT, but would like to highlight that other SAM variants such as SSAM, ESAM, GSAM, and FisherSAM also obtain only marginal (or no) improvements in this setting, so this is somewhat within expectations. We hope this addresses your concern.
3. __language tasks__
Thank you for the suggestion. We focused on vision data following the trend in the literature on SAM and its variants. However, a few papers do indeed study language tasks and we would be happy to include an experiment on e.g. the IWSLT’14 DE-EN machine translation task that was considered in the ASAM paper, in the camera-ready version. Please do not hesitate to let us know if there were specific experiments you had in mind.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thanks for the rebuttal. I believe that most of my concerns have been addressed, and I would like to raise my score when review editing is allowed.
Here are some further questions out of curiosity.
In the context of adversarial robustness, a recent work aims to understand SAM as implicit adversarial training and revealed that SAM is beneficial for improving robustness [1]. On the other hand, there are several works attributing adversarial vulnerabilities of DNNs to normalization layers, which designed normalization-layer-based adversarial training methods to improve adversarial robustness [2,3].
My questions are:
1. Can SAM-ON and the corresponding observations in this paper be the connection between normalization layers and adversarial robustness?
2. Can using SAM-ON only improve adversarial robustness?
Feel free to dismiss these questions if you are not familiar with adversarial robustness. However, I believe discussing these questions in this paper can help you strengthen the understanding of the effectiveness of SAM-ON.
[1] Sharpness-Aware Minimization Alone can Improve Adversarial Robustness. ICML 2023 workshop
[2] Intriguing properties of adversarial training at scale. ICLR 2020
[3] Batch Normalization Increases Adversarial Vulnerability and Decreases Adversarial Transferability: A Non-Robust Feature Perspective. ICCV 2021
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you very much. We are grateful for your appreciation of our paper and for your valuable suggestions and references related to adversarial robustness. This is definitely something that we are keen to explore. Following your suggestion, we will start experiments to investigate SAM-ON with respect to adversarial robustness. We will try to include the results and a discussion on potential insights into the connection between SAM, the normalization layers, and adversarial robustness, into the revised paper. Thank you again for bringing these references to our attention.
---
Rebuttal Comment 1.2:
Title: Copy: Official Comment by Reviewer dJUQ
Comment: Dear authors, please paste the response below this comment. Thanks
---
Reply to Comment 1.2.1:
Title: Copy: Official Comment by Authors
Comment: Dear Reviewer,
Following your suggestion, we have performed a preliminary study on the adversarial robustness of SAM-all and SAM-ON (with a WRN28-10 on CIFAR100). We find that both SAM-all and SAM-ON can significantly improve over SGD-trained models with respect to adversarial robustness. Interestingly, SAM-ON is slightly more robust than SAM-all, whereas ASAM-elem.-$\ell_\infty$-all is slightly more robust than ASAM-elem.-$\ell_\infty$-ON, but the differences are small and often within the standard deviation (reported over 3 seeds). Overall, we find that in order to get SAM-like improvements for adversarial robustness (as shown in https://arxiv.org/pdf/2305.05392.pdf) it is enough to only perturb the normalization layers in SAM, illustrating again their special role as outlined in our paper. These results are preliminary, but we found them interesting enough to report them already. Thank you again for the suggestion!
| threat model | $\epsilon$ | SGD | SAM-all | SAM-ON | ASAM-el.-$l_\infty$-all | ASAM-el.-$l_\infty$-ON |
|----------------------------------|------------|---------------------|---------------------|------------------------------|------------------------------|-----------------------------|
| $\ell_2$ | $0.10$ | $18.14 ^{\pm 0.11}$ | $28.14 ^{\pm 1.09}$ | $ {31.28 }^{\pm 0.50}$ | ${30.33 }^{\pm 0.80}$ | $30.16 ^{\pm 0.26}$ |
| $\ell_2$ | $0.20$ | $2.33 ^{\pm 0.11}$ | $5.39 ^{\pm 0.34}$ | ${6.62 }^{\pm 0.07}$ | ${6.63 }^{\pm 0.12}$ | $6.10 ^{\pm 0.18}$ |
| $\ell_2$ | $0.50$ | $0.01 ^{\pm 0.01}$ | $0.06 ^{\pm 0.02}$ | ${0.07 }^{\pm 0.01}$ | ${0.10 }^{\pm 0.01}$ | $0.07 ^{\pm 0.02}$ |
| $\ell_\infty$ | $1/255$ | $10.29 ^{\pm 0.04}$ | $17.96 ^{\pm 1.08}$ | ${19.56 }^{\pm 0.33}$ | ${20.69 }^{\pm 0.81}$ | $18.63 ^{\pm 0.30}$ |
| $\ell_\infty$ | $2/255$ | $0.67 ^{\pm 0.01}$ | $1.96 ^{\pm 0.17}$ | ${2.16 }^{\pm 0.07}$ | ${2.62 }^{\pm 0.01}$ | $2.05 ^{\pm 0.17}$ |
| $\ell_\infty$ | $4/255$ | $0.01 ^{\pm 0.01}$ | $0.05 ^{\pm 0.01}$ | ${0.05 }^{\pm 0.00}$ | ${0.10 }^{\pm 0.01}$ | $0.05 ^{\pm 0.01}$ |
| Clean acc. | | $80.6^{\pm0.2}$ | $83.0^{\pm0.3}$ | ${84.0}^{\pm0.2}$ | $83.3^{\pm0.2}$ | ${83.9}^{\pm0.2}$ | | Summary: This paper relates the normalization layer to Sharpness-Aware Minimization (SAM). Surprisingly, it finds that perturbing only the affine parameters of the normalization layers in the perturbation stage leads to better generalization performance. The authors then investigate the reason behind this surprising phenomenon and find that SAM-ON (only perturbing the normalization layers) even increases sharpness, which casts doubt on the intuition that the effectiveness of SAM comes from sharpness minimization. Extensive experiments demonstrate the effectiveness of the proposed method.
Strengths: 1. This paper identifies an important and interesting problem: is sharpness-aware minimization effective because it minimizes sharpness? This paper found that perturbing only the normalization layer in the perturbation stage of SAM leads to both superior generalization performance and higher sharpness, which casts doubt on the assumed relationship between sharpness and generalization.
2. This paper investigates SAM from the perspective of the normalization layer and finds that normalization might be the reason for the success of SAM, which is quite surprising.
3. SAM with only the normalization layer perturbed largely saves computational cost compared to the original SAM, which needs to perturb all the parameters.
Weaknesses: 1. These experiments should be conducted on more network architectures, including VGG-Net, etc.
2. The observation that SAM with only perturbing the normalization layer holds the higher sharpness than original SAM might be trivial. SAM are designed to reduce the sharpness, while a degraded version of SAM should hold a higher sharpness in principle.
3. In this paper, it is clear that there is deep relationship between normalization layer and generalization, and there are indeed some papers on the relationship between normalization layer and generalization. Therefore, a related discussion should be included.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In fact, the authors don't explain why SAM with only perturbing normalization layer works better than original SAM. Therefore, I would like the authors to include some discussion or theoretical analysis on that.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes. This paper discusses the limitation of this paper, where they only focus on the vision data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and helpful suggestions.
We address your concerns below.
- __“more network architectures, including VGG-Net”__:
Following your suggestion, we have provided results for more neural network architectures: DenseNet and various VGG-Nets in Table 1 in the rebuttal pdf. Across the different architectures and SAM variants we consistently observe that SAM-ON outperforms regular SAM.
- __“observation that … [SAM-ON] .. holds the higher sharpness than original SAM might be trivial. SAM are designed to reduce the sharpness, while a degraded version of SAM should hold a higher sharpness in principle.”__:
We agree this intuition is sensible; however, prior work by Mi et al., 2022 hypothesizes that sparse SAM approaches can actually lead to a flatter landscape than regular SAM. It was thus worth investigating whether SAM-ON (with its targeted, high-sparsity approach) would still be able to reduce sharpness. We will extend the discussion in the paper.
- __“there is deep relationship between normalization layer and generalization .. Therefore, a related discussion should be included.”__
We agree. Some discussion on the connection between normalization layers and their affine parameters with generalization can already be found in the related work section, but we are keen to extend this further. If you have any specific papers in mind please let us know and we would be happy to discuss them.
- __“the authors don't explain why SAM with only perturbing normalization layer works better than original SAM.”__:
We believe it is an important question for future work to understand the mechanism behind the success of not only SAM-ON but also regular SAM as our work lends support to the idea that sharpness-reduction may not be the sole source of SAM’s success. The search for alternative explanations for SAM’s success is an active area of research, for example recent work (that appeared on the arXiv after the NeurIPS submission deadline and hence not included in our related work section) suggests that the use of SAM prunes a significant number of activations and leads to low-rank features (arXiv:2305.16292). Our work contributes to this search by illustrating the success of only applying SAM to the normalization layers. We provide additional experiments in Figure 1 in the rebuttal pdf that explore potential connections of SAM-ON with weight decay and dropout. We find that SAM-ON’s improvements are not due to its interaction with weight decay (Figure 1, bottom left), nor can its success be mimicked by applying dropout solely to the normalization layers (Figure 1, top). We will include this discussion in the revised paper.
---
Rebuttal Comment 1.1:
Title: Thank you for response
Comment: I have thoroughly read the response and appreciate the detailed response to my questions.
- I have noticed the additional experiments on VGG-nets and DenseNets and the experimental results are sound to me.
- I agree that it is worth investigating whether SAM-ON could still be able to reduce sharpness.
- I agree that it is important to study the underlying relationship between SAM and BN and would like to see any future work on that.
Regarding the authors' response and other reviewers' comments, I have raised my rating.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Thanks for reading our rebuttal and checking our new results. We are happy we could address your concerns. | Summary: This paper tries to analyze the effects of different layers for sharpness-aware minimization (SAM). The authors find that normalization layer plays an important role in the improvements of SAM and only perturbing the normalization layer can obtain a comparable result. Therefore, this paper proposes SAM-ON and the experimental results illustrate that SAM-ON can obtain a great performance on many different tasks with different models.
Thanks for reading our rebuttal and checking our new results. We are happy we could address your concerns. | Summary: This paper analyzes the effects of different layers for sharpness-aware minimization (SAM). The authors find that the normalization layers play an important role in the improvements of SAM and that perturbing only the normalization layers can obtain comparable results. Therefore, this paper proposes SAM-ON, and the experimental results illustrate that SAM-ON can obtain strong performance on many different tasks with different models.
Strengths: Strengths:
1. This paper focuses on an important problem; SAM is very important for improving generalization.
2. The experimental finding is very interesting and can efficiently reduce the computation.
3. The authors evaluate the performance of the proposed method on various datasets and models.
Weaknesses: Weakness:
1. Although the proposed method improves performance on CIFAR, CIFAR is too simple for SAM and is too weak to illustrate the improvement of SAM-ON. Moreover, the ResNet results on ImageNet are too close for me and the improvement is minimal. I'm not sure whether you ran the experiments multiple times with different seeds.
2. The experiments on ViT are also a little weak for me. For ViT, CIFAR is too simple and you should train a ViT from scratch on ImageNet. SAM can achieve significant improvements on ViT+ImageNet, so you may need to analyze this setting to obtain more convincing results.
3. Although SAM-ON can reduce the computation, the training efficiency cannot be significantly improved.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Question:
1. Did you try to run the experiments multiple times with different seeds?
2. You say only perturbing normalization layers can obtain similar results with perturbing all parameters. My question is do you try to only perturb the parameters outside the normalization layer?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and helpful suggestions.
We address your concerns below.
- __ImageNet results__:
We agree with your comments on the ImageNet results. As you suggested, we have run a ViT from scratch on ImageNet (with additional evaluations on ImageNet-R and ImageNet-Sketch) with both the Lion and AdamW optimizers and find that SAM-ON consistently outperforms SAM-all in this setting (see Table 2 in the attached rebuttal pdf); we will include this in the revised paper. Our ImageNet results at present are indeed for a single seed (all other experiments use multiple seeds), as we prioritized showing results across different SAM variants. Following your comment, we have started runs with different seeds. Preliminary results show very little variability (standard deviations of 0.047% and 0.007% for SAM-all and SAM-ON with RN50 and elementwise $\ell_\infty$). We will ensure to include multiple seeds for all results in the revised paper. Further, we agree that the results for ImageNet and ResNet are somewhat less impressive than for ViT, but would like to highlight that other SAM variants such as SSAM, ESAM, GSAM, and FisherSAM also obtain only marginal (or no) improvements in this setting, so this is within expectations.
- __“CIFAR is too simple for SAM”__:
We would like to draw the reviewer’s attention to e.g. Table 1 of our paper, where it can be seen that for CIFAR-100 regular SAM (denoted SAM-all) can improve by a large margin over the vanilla optimizer. For several model types, CIFAR-100 is far from being solved. Therefore, the consistent improvement of SAM-ON over SAM-all across different models (see Table 1 in rebuttal pdf for additional VGG-Nets and DenseNet) and SAM variants on CIFAR data is remarkable and we believe worth reporting. We hope this in combination with the novel ImageNet and ViT results addresses the reviewer’s concerns, but please do not hesitate to let us know if you believe further experiments are missing.
- __“Although SAM-ON can reduce the computation, the training efficiency cannot be significantly improved”__:
In Table 5 we observe that SAM-ON reduces wall-clock time compared to SAM by about 17%. We agree this is not a large improvement, although it does improve upon the sparse SAM approach by Mi et al., 2022. Further computational gains ($\approx$ 30%) may be achieved for our approach by applying SAM to only _part of_ the normalization layers. Although this leads to some reduction in test accuracy, the approach still strongly outperforms the base optimizer (see Figure 1, bottom right in rebuttal pdf). We are happy to include this discussion in this paper, but should highlight here that the main aim of our work was to shed light on the remarkable success of SAM and to illustrate the important role of the normalization layers in obtaining its enhanced generalization performance and the connection with sharpness. Although the reduced computation obtained using SAM-ON is a positive side-effect, it was not the main aim of our work and hence was not listed as one of our main contributions on p2.
- __“My question is do you try to only perturb the parameters outside the normalization layer?”__:
Yes, this is denoted as no-norm (dotted lines) in Figures 1, 2, 6, and 7. no-norm typically performs similarly to SAM-all, except for some perturbation variants like elementwise-$\ell_2$, where we observed a drastic drop in test accuracy. We briefly discuss this in Sections 4 and 5.3, but will extend this discussion in the revised paper.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Thanks again for your review and helpful comments.
Apart from the layerwise perturbation model, all runs with three random seeds for ResNet50 on ImageNet have finished (see table below). Following your suggestion, we will include those in the revised paper.
Further, we would like to draw your attention to our previously posted rebuttal, in which thanks to your suggestions we show:
- consistent gains with a ViT trained from scratch on ImageNet using both Lion and AdamW as base optimizers, including an evaluation on OOD datasets.
- a discussion on the significance of our results, especially given the new experiments on CIFAR and ImageNet data reported in the rebuttal pdf.
- that a trade-off for further computational gains at the cost of somewhat reduced accuracy gains can be achieved by only applying SAM to part of the normalization layers.
- that _"only perturb the parameters outside the normalization layer"_ is shown as no-norm results in our paper.
Given the approaching end of the author-reviewer discussion period, we wanted to make sure our rebuttal addressed your concerns. Are there further questions we could help you with?
| top-1 | SGD | SAM | ESAM | GSAM | elem. $l_{2}$ | elem. $l_{2}$ | elem. $l_{\infty}$ | elem. $l_{\infty}$ |
|-------|--------------------|--------------------|--------|--------|--------------|--------------|--------------------|--------------------|
| | | all | all | all | all | ON | all | ON |
| ResNet-50 | $77.03^{\pm0.13}$ | $77.65^{\pm0.11}$ | $77.05$ | ${77.20}$ | $77.65^{\pm 0.05}$ | $\textbf{77.82}^{\pm 0.14}$ | $77.45^{\pm0.04}$ | $\textbf{77.82}^{\pm0.01}$ |
---
Rebuttal Comment 1.2:
Title: Thanks for your effort to make this paper more clear!
Comment: Thanks for your response and effort to make this paper more clear!
I still have several concerns:
1. ImageNet results: I find the performance gain about imagenet training from scratch in your attached pdf is a little limited. For example: top 1 ACC: 71.33 -> 71.41. In my past experience about SAM, the improvement is usually significant for ViT training, such as gsam. So I think maybe the proposed method is not vet suitable for vit training.
In addition, I find the most experiments are about ResNet, such as table 1,2,3. However, the accuracy on imagent is very close (SAM vs SAM-ON), although the improvement on cifar is clear. So I think maybe the proposed method can work well on simple tasks, but it is still not very clear for more complex tasks.
Finally, I think big model may be easier to converge to sharp minima and we need to focus more on these big models and datasets. CIFAR is not enough for me.
2. Only perturb the parameters outside the normalization layer: Thanks for your clarification. As you mentioned, no-norm typically performs similarly to SAM-all. Based on that, whether I can say the parameters outside the normalization layer provide the most properties that SAM can improve the performance? So that means the parameters outside the normalization layer are all you need and the normalization layer has some special properties. If that is true, maybe you need to provide more analysis.
Overall, I think these paper mainly focus on the experimental analysis about different layers for SAM. So I think the author need to provide more results and analysis to make us convincing.
---
Reply to Comment 1.2.1:
Comment: Dear reviewer,
Thank you for your response, we will address your concerns below.
- Regarding your overall comment _“I think the author need to provide more results and analysis to make us convincing.”_:
Following the suggestions of all reviewers, we provided the following experiments in the rebuttal:
1. ViTs trained from scratch and evaluated on ImageNet and OOD datasets.
2. Multiple seeds for ResNet results on ImageNet.
3. Further computational speed-up can be achieved by only perturbing part of the normalization layers.
4. More models on CIFAR (3 VGG variants + DenseNet).
5. Evaluation of more sharpness measures.
6. SAM-ON achieves similar adversarial robustness as SAM (table provided in a comment to reviewer dJUQ).
7. Interaction with weight decay.
8. SAM-ON’s success can not be mimicked by applying dropout solely to the normalization layers.
In your original review you asked for specific experiments and clarifications (ImageNet from scratch with ViT, training efficiency, multiple seeds, perturb all but the normalization layers), which we all provided in our rebuttal (bulletpoint 1-3 above + explanations on no-norm). In addition to the analysis in the main paper (Section 5, we investigate sparsity, sharpness, the effect on the affine parameter values and different training stages), we explored potential connections of SAM-ON with weight decay and dropout and its robustness to adversarial perturbations after insightful remarks by reviewer wExQ. We would further like to stress that the mechanism behind vanilla SAM’s success is still not well understood and that our work lends support to the idea that sharpness-reduction may not be the sole source of SAM’s success, which fits well into recent findings (https://arxiv.org/pdf/2302.07011.pdf and https://arxiv.org/abs/2305.16292) and hence contributes to an improved understanding of the method.
If there are any other specific experiments you would like to see, please let us know and we will gladly provide these in the revised paper.
Regarding your specific comments:
- _”most experiments are about ResNet, such as table 1,2,3.”_:
Table 3 shows ViTs, and the gains are often even clearer than for ResNets. Further ViT results can be found in Table 4 (for fine tuning on ImageNet) and the rebuttal pdf (for ImageNet training from scratch + OOD evaluations).
- Regarding 1. (ImageNet results):
Our ImageNet results (for both ResNet and ViT) conclusively show that perturbing only the normalization layers (<0.1% of the total parameters) can outperform or at least perform on par compared to perturbing all the parameters. This is highly surprising and we believe an important finding that should be shared with the community as it provides new insights into SAM, given that the original motivation behind SAM is a sharpness measure defined with respect to _all_ parameters, and minimizing the respective measure only for an extremely small fraction of the total parameters would intuitively not lead to the same or even improved generalization benefits. There is already a significant improvement between SAM and the vanilla optimizer, and hence we think that any improvement of SAM-ON over this is fairly remarkable. Further gains can be found by changing the base SAM variant used (in Tables 1 and 3 we show that SAM-ON works for a range of different SAM-variants) and we would be happy to include results with GSAM in the revised paper (unfortunately this won’t be possible by the end of the discussion period, which is tomorrow).
- Regarding 2. (no-norm results)
Thank you for your new comment on the no-norm results. We believe it’s an important ablation study, which is why we included it in our paper. However, it is somewhat within expectations that perturbing all but the normalization layers, i.e., >99.9% of the parameters, would often give similar performance as SAM, given the findings of Mi et al., 2022. On the contrary, what is surprising is that perturbing only the normalization layers gives similar and in almost all cases better performance than SAM. We would also like to draw your attention to the fact that omitting the normalization layers _can_ lead to drastic performance decreases, as we show in our paper for ASAM elementwise-$\ell_2$. | Rebuttal 1:
Rebuttal: We would like to thank you all for your time and useful comments. We are encouraged that you found that our paper addresses an interesting and important problem by demonstrating _"a new understanding of the underlying mechanism of SAM, which is both novel and informative”_ with _"extensive experiments [that] demonstrate the effectiveness of the methods"_.
Following your suggestions, in the pdf below we attach results for _“ViT [trained] from scratch on imagenet”_ with our SAM-ON method and also evaluate _“out of distribution robustness”_ (Table 2). For all datasets (ImageNet, ImageNet-R, ImageNet-sketch) and both Lion and AdamW optimizers, we observe that SAM-ON outperforms SAM-all. We also include results _“on more network architectures, including VGG-Net”_ (Table 1) and DenseNet on CIFAR data and again observe SAM-ON’s superiority across different SAM variants.
We address your other comments below and look forward to engaging with you in the discussion period.
Pdf: /pdf/2096de40cbea29920e252b88866fa90174960025.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DrugCLIP: Contrastive Protein-Molecule Representation Learning for Virtual Screening | Accept (poster) | Summary: This paper proposes a new virtual screening framework DrugCLIP, which aims to identify molecules most likely to bind to a target.
Inspired by the recent multi-modal learning approach CLIP, the authors propose to utilize contrastive learning for learning representations of molecules and proteins.
The authors reformulate the drug binding problem, based on these representations, as an information retrieval task.
Strengths: - By adopting a self-supervised multi-modal learning approach to binding prediction problems, DrugCLIP outperforms previous work that relies on labels, which are costly in both time and money.
- Augmentation technique inspired from domain knowledge is proposed, which is novel.
- Extensive experiments have been conducted, including human evaluation.
Weaknesses: - More detailed explanation of the experimental setting will be helpful for the readers.
- In line 109, the authors mention "an end-to-end manner," which may mislead readers into thinking that the whole framework is trained end-to-end. This should be clarified further, since DrugCLIP is a two-phase method.
- In biological data, much missing data exists, which should not be considered a negative sample, as done in section 3.3. Recent self-supervised learning approaches deal with such false negatives and could also be adopted in this framework.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In my knowledge, the inherent principle in self-supervised learning is that "multiple views of the image should be consistent in representation space." This principle can also be applied to multi-modal learning: "representation of text and images that share the same semantic should be consistent in representation space."
But I'm not sure why DrugCLIP works in the sense that binding molecules and proteins should be closer for binding. Do you have any intuitive explanation for that?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Provided in Weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer GQNT
We sincerely thank the reviewer for the positive feedback. Your support and encouragement are greatly appreciated. We have addressed all your concerns in the following responses.
### Q1: Detailed explanation of the experimental setting
We apologize for any inconvenience that may have been caused while reading our original paper. Most of the experimental settings are provided in the Appendix to save space. We will list some of the key experimental settings here.
For the virtual screening experiments, we mainly evaluate our model on DUD-E and LIT-PCBA. The evaluation metrics we applied are AUROC, BEDROC, and EF.
We trained our model on PDBBind and ensured that proteins with the same PDB ID were excluded from the test set. As for the baseline models, all reported metrics were obtained directly from the original papers. We used the Adam optimizer with a learning rate of 0.001. We trained our model on 4 A100 GPUs with a batch size of 4x48. We select the checkpoint with the best validation BEDROC, and the patience is set to 20.
Regarding the Efficiency Analysis, we conducted tests on our methods, docking, and other baseline methods using a single A100 GPU and CPU with 128 cores. Due to the extensive time required to run molecular docking on the full Enamine dataset, we opted to perform the docking on a smaller subset. Using the average docking time from this subset, we estimated the resource usage on the entire dataset.
### Q2: Improper notion of "an end-to-end manner"
For sure, we adopt a two-phase procedure for training the DrugCLIP model. We will revise our paper and remove the notion of "an end-to-end manner" for better presentation. Thanks for pointing it out!
### Q3: Many missing data require sampling technique to reduce false negatives
We sincerely thank you for offering us this valuable insight. We totally agree that simply adopting in-batch negative sampling might raise problems when there is a lot of missing data. However, in this field, only an extremely small proportion of molecules exhibit binding affinity towards the target protein in the vast chemical space. Consequently, the likelihood of sampling false-negative data remains comparatively low. Besides, it is essential to note that contrastive learning diverges from supervised learning methods that rely on binding affinity labels.
Contrastive learning possesses the capability to discern the relationship between positive and negative examples by leveraging the fact that the majority of negative instances are indeed worse binders than the positive ones. Hence, contrastive learning exhibits a greater capacity for noise tolerance compared to directly regressing to a specific metric, as in supervised learning.
To the best of our knowledge, certain prior studies have employed molecule similarity to filter out similar structures. However, this approach may not be suitable when considering the cliff effect. In our future research, we aim to explore sampling techniques inspired by biological domain knowledge to address this limitation and enhance the accuracy of our methodology.
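As an illustration of the in-batch negative strategy discussed above, here is a minimal pure-Python sketch of an InfoNCE-style loss in which the i-th pocket's positive is the i-th molecule and every other molecule in the batch serves as a negative. This is a sketch only, not our actual implementation; the function names and the temperature value are assumptions.

```python
import math

def in_batch_contrastive_loss(pocket_embs, mol_embs, tau=0.07):
    """InfoNCE-style loss with in-batch negatives: each pocket is
    contrasted against all molecule embeddings in the batch, with the
    matching (diagonal) pair treated as the positive."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n = len(pocket_embs)
    loss = 0.0
    for i in range(n):
        # Temperature-scaled similarity of pocket i to every molecule.
        logits = [dot(pocket_embs[i], m) / tau for m in mol_embs]
        log_z = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_z)  # cross-entropy against the diagonal
    return loss / n
```

A correctly aligned batch (binding pairs on the diagonal) yields a much lower loss than a shuffled one, which is what drives the representations of binding pairs together.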
### Q4: Intuitive explanation for binding molecules and proteins should have close representations
There is a compelling rationale behind this statement, drawing from both biological and machine-learning perspectives.
In the realm of drug discovery, widely known aphorisms, such as 'similar molecules bind to the same pocket' and 'similar pockets bind to the same molecules,' have significantly influenced drug design [1,2]. Inspired by these intuitions, it is reasonable to project molecules and pockets into a shared latent space and align their representations from binding pairs. Upon observing pocket-ligand structures at a more intricate level, we often find numerous mutually corresponding features, such as H-Bond donors/acceptors and positive/negative charges. These features substantiate the notion that binding molecules and proteins should exhibit close representations, underpinning our approach with strong biological knowledge support.
From an alternative perspective, the notion of binding molecules and proteins can be likened to the relation between user and item in recommender systems, whose objective is to recommend appropriate items to users. The user-item relationship mirrors the Target-molecule association. Similar to the aforementioned aphorisms, similar users may like the same items, and similar items may pique the interest of the same user. Throughout the evolution of recommender systems, collaborative filtering and latent factor models have been widely used to quantify the relation between user and item, employing a similarity function based on latent representations [3]. It is evident that our conjecture, that binding molecules and proteins should possess closely aligned representations, resonates quite strongly.
[1] Hoffmann et al, A new protein binding pocket similarity measure based on comparison of clouds of atoms in 3D: application to ligand prediction, 2010, BMC bioinformatics
[2] Chackalamannil et al, Comprehensive medicinal chemistry III, 2017, Elsevier
[3] Koren et al, Matrix Factorization Techniques for Recommender Systems, 2009, Computer
---
Rebuttal Comment 1.1:
Title: Thank you for author rebuttal
Comment: We sincerely thank the authors for their effort in addressing my concerns. Also, domain knowledge on **why contrastive learning on molecules and proteins** would be helpful for readers in understanding the effectiveness of contrastive learning in molecule-protein representation learning.
Therefore, I will raise my score to 7. | Summary: The authors recast virtual screening as an information retrieval problem: by learning appropriate representations of both proteins and molecules, and taking contrastive loss based on binding affinity between protein-molecule pairs, the aim is to learn a model where protein queries passed through one encoder can be used to retrieve a molecule from a large-scale library. The design of the model allows training data from a range of sources such as PDBBind, but also ChEMBL and BioLip, as well as augmentation with protein structures based on protein homology across evolution.
The method is benchmarked using DUD-E and LIT-PCBA, and by human evaluation of comparative examples from Glide, a commercial docking system. The method is demonstrated to outperform most finetuned deep learning methods on the benchmarks and produce better binding molecule sets than Glide in 80% of cases, as judged by human experts.
Strengths: The authors suggest a good method for obtaining aligned representations for proteins and molecules, where the alignment is induced by protein binding affinity, using a contrastive loss. The representations of molecules are undertaken in 3D which is still not the default in this field, despite its clear importance in protein-affinity, and uses a biologically-plausible data augmentation.
The results in the DUD-E benchmark are strong, particularly in a zero-shot setting, significantly out-performing other methods. Results on LIT-PCBA are also strong, with enrichment factors far above the performance achieved with commercial docking software. The results obtained on the time taken to virtually screen large libraries are impressive -- there are clear advantages over commercial docking software, and machine-learned scoring functions.
In addition, the clear ablation studies and human evaluation add nice details to the work. Overall this work is a very strong addition to the techniques available for screening and it will be exciting to see it in use.
Weaknesses: There are no significant weaknesses in this paper.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: No questions
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: No limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer cZpn
We extend our heartfelt gratitude to the esteemed reviewer for their generous appraisal of our work and for awarding high scores. Your positive feedback and acknowledgment of our efforts are deeply appreciated. We are thrilled that our research has met your expectations, and we sincerely thank you for taking the time to review our submission. Your encouragement motivates us to continue striving for excellence in our endeavors.
---
Rebuttal 2:
Title: Respond to authors' rebuttal
Comment: Please, look at the authors' rebuttal and the other reviewers' comments and indicate if you would like to change anything in your review. | Summary: The authors proposed a contrastive learning framework, DrugCLIP, for the drug virtual screening task which identifies potential drugs from vast compound databases to bind with a particular protein pocket. It reformulates virtual screening as a dense retrieval task and employs contrastive learning to align representations of binding protein pockets and molecules from a large quantity of pairwise data without explicit binding-affinity scores. Specifically, the framework computes a contrastive loss between two separate pre-trained encoders to maximize the similarity between a protein-molecule pair which can binding together and minimize it otherwise. Besides, the authors introduce a biological-knowledge inspired augmentation method, HomoAug, which creates protein-molecule pairs based on protein homology evolutions. Experiments on two challenging virtual screening benchmarks, demonstrate that zero-shot performance of this model surpasses most deep learning baselines that carefully finetune on labeled data.
Strengths: 1. The proposed DrugCLIP framework reformulates virtual screening as a dense retrieval task and employs contrastive learning to align representations of binding protein pockets and molecules from a large quantity of pairwise data, which provides researchers a new perspective for virtual screening.
2. The designed contrastive loss relieve the dependency on explicit labeling of binding affinity, and facilitates the usage of large-scale unlabeled data beyond densely annotated small datasets (such as PDBBind).
3. The dense retrieval characteristic of DrugCLIP brings high efficiency to online inference and promising high-throughput virtual screening on billions of molecules.
4. A biological-knowledge inspired augmentation method named HomoAug are proposed, which creates protein-molecule pairs based on protein homology evolutions. The shortage of public data is indeed obvious in drug discovery, and this augmentation method may help alleviate it.
5. The organization of this paper is very clear and easy to understand.
Weaknesses: 1. The main contribution of this paper is to directly apply the contrastive learning CLIP to the virtual screening scenario, which is maybe insufficient to support a poster published in NeurIPS.
2. The proof of Proposition 1 seems insufficient to support training-test consistency. Firstly, the model ability of f_theta and k_theta is ignored. The docking methods (k_theta here) can directly capture protein-ligand interactions and maybe more natural to adapt to conformation perturbations. Secondly, the binding conformations of ligands are different from their free states probably, and thus the optimal Rotation R and translation t cannot be found. Thirdly, many SOTA docking methods employ two-tower frameworks, such as Equibind, TANKBind, DiffDock, etc.
3. The model lacks further experiments on the screening power compared to docking methods in CASF-2016 which is an important benchmark for in silico drug discovery.
4. Lack of interpretability. Although this model demonstrates enhanced effectiveness and efficiency, it falls short in terms of interpretability compared to docking methods, as the authors mentioned in the appendix. These conventional approaches offer visualizations that elucidate the binding mechanism between a pocket and a molecule, which is intuitive and reliable for chemistry researchers to check the rationality of receptor-ligand interactions and modify the molecule structures.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The questions are basically mentioned in the “weaknesses” part above.
1. The proof of Proposition 1 seems insufficient to support training-test consistency. Firstly, the model ability of f_theta and k_theta is ignored. The docking methods (k_theta here) can directly capture protein-ligand interactions and maybe more natural to adapt to conformation perturbations. Secondly, the binding conformations of ligands are different from their free states probably, and thus the optimal Rotation R and translation t cannot be found. Thirdly, many SOTA docking methods employ two-tower frameworks, such as Equibind, TANKBind, DiffDock, etc.
2. The model lacks further experiments on the screening power compared to docking methods in CASF-2016 which is an important benchmark for in silico drug discovery.
3. Lack of interpretability. Although this model demonstrates enhanced effectiveness and efficiency, it falls short in terms of interpretability compared to docking methods, as the authors mentioned in the appendix. These conventional approaches offer visualizations that elucidate the binding mechanism between a pocket and a molecule, which is intuitive and reliable for chemistry researchers to check the rationality of receptor-ligand interactions and modify the molecule structures. Try to give some correlation analysis and demonstrate some interaction patterns learned by DrugCLIP and not just show the performance. That would be more convincing for biochemistry scientists.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Lack of interpretability. Although this model demonstrates enhanced effectiveness and efficiency, it falls short in terms of interpretability compared to docking methods, as the authors mentioned in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer 8ynk
We greatly appreciate your valuable feedback. We are committed to refining our work to enhance its quality and impact.
### Q1: Regarding our paper's contribution
Our paper's main contribution lies not merely in applying contrastive learning, specifically CLIP, to the virtual screening (VS) task and obtaining commendable results. Instead, our focus is on pioneering a new information-retrieval paradigm for tasks within the VS domain, in place of the existing regression or classification views. This method enables searching through billion-scale molecular libraries in several minutes while maintaining a high recall rate.
Upon transforming virtual screening (VS) into a similarity-matching problem between proteins and molecules, it is crucial to highlight the non-trivial nature of the contrastive learning concept in this scenario. In the original context, images and text serve as parallel mediums, each offering distinct perspectives on a shared subject. It is therefore natural for CLIP to use contrastive learning to learn a joint representation by differentiating similar pairs from dissimilar ones. In contrast, the relationship between a drug and its target is non-parallel: it is a deep-seated binding relationship that underscores their intricate interplay. Given this, the core principles of contrastive learning as applied in the drug-target domain differ markedly from those in the text-image domain. Inspired by collaborative filtering and the latent factor model in recommender systems, where the relationship between user and item can be formulated as a similarity function over their latent representations, we propose to view proteins and molecules analogously and learn their latent representations by distinguishing positive and negative pairs in a contrastive learning manner. However, few true negative protein-molecule binding pairs are available in the field of virtual screening. Fortunately, if a specific protein-molecule pair has already been verified to bind, it is probable that each partner has a negative binding relationship with other molecules/proteins. This observation can be incorporated as an in-batch sampling strategy. It is noteworthy that while the techniques employed in DrugCLIP resemble standard CLIP methods, the underlying motivation and principles differ significantly.
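To make the in-batch sampling strategy concrete, here is a minimal sketch in plain Python of a symmetric contrastive loss over in-batch negatives: each verified protein-molecule pair is the positive, and every other molecule (or pocket) in the batch is treated as an implicit negative. The function name and toy embedding inputs are hypothetical; this is an illustration of the idea, not the exact DrugCLIP implementation.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def in_batch_contrastive_loss(pocket_emb, mol_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss: for each pocket, the paired molecule
    in the batch is the positive; every other molecule in the batch is an
    implicit negative (and vice versa for each molecule)."""
    n = len(pocket_emb)
    # similarity matrix between all pockets and all molecules, scaled by temperature
    sims = [[dot(p, m) / temperature for m in mol_emb] for p in pocket_emb]
    loss = 0.0
    for i in range(n):
        # pocket -> molecule direction: cross-entropy over row i
        row = sims[i]
        loss += -row[i] + math.log(sum(math.exp(s) for s in row))
        # molecule -> pocket direction: cross-entropy over column i
        col = [sims[j][i] for j in range(n)]
        loss += -col[i] + math.log(sum(math.exp(s) for s in col))
    return loss / (2 * n)
```

As expected for such a loss, a batch whose paired embeddings are aligned scores lower than one where the pairing is scrambled.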
In terms of our other technical contributions, our paper introduces HomoAug, an innovative pocket augmentation method. The significance lies in its application to biological data, where data augmentation presents intricate challenges. We also explore advanced modeling techniques to meticulously capture the nuanced atomic interactions between the ligand and the pocket. You'll find a comprehensive exploration of these findings within the Appendix. While these contributions are noteworthy, they do not form the central focus of our paper. Therefore, we've deliberately chosen to reserve a detailed discussion of these aspects for the Appendix, indicating their potential as promising avenues for future research.
### Q2: The proof of Proposition 1 seems insufficient
Thank you for your feedback. Proposition 1 aims to show the superior robustness of the two-tower architecture employed in DrugCLIP compared to single-tower models when provided with inaccurate 3D structures as input. The inputs for both models are molecule conformations that have not been docked into the protein.
The key rationale behind the proof lies in the fact that deviation of the two-tower model is not reliant on inter-distances. We apologize for any confusion regarding the optimal R and t values; this term may not be as small as initially indicated. Nevertheless, the error of the single-tower model remains higher due to additional inter-distance deviation. Consequently, the single-tower model is more reliant on docking accuracy, whereas the dual-tower model can accommodate unbound structures as input.
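The intuition that features built from intra-molecular distances are unaffected by how the ligand is posed can be checked numerically. The sketch below is a hypothetical 2-D toy example (not the actual model): pairwise distances within a molecule are unchanged by any rigid transform (R, t), whereas pocket-ligand inter-distances, which a single-tower model depends on, would shift under the same transform.

```python
import math

def pairwise_distances(points):
    """Intra-molecular distance matrix: the kind of pose-invariant
    feature a tower can consume without docking."""
    n = len(points)
    return [[math.dist(points[i], points[j]) for j in range(n)]
            for i in range(n)]

def rigid_transform(points, angle, tx, ty):
    """Apply a 2-D rotation (angle) and translation (tx, ty) -- the
    (R, t) discussed in Proposition 1 -- to every point."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]
```

Applying an arbitrary rotation and translation to a toy molecule leaves its distance matrix identical, which is the sense in which the two-tower deviation is independent of the pose.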
Indeed, as you have indicated, there exist docking models that can be employed for screening. However, it should be emphasized that a prior study [1] showcased the inferior performance of blind docking approaches such as EquiBind, TANKBind, and DiffDock when transitioning from blind docking to local docking. While these methods excel at locating the pocket, they are not suitable for screening purposes.
[1] Yu et al., Do Deep Learning Models Really Outperform Traditional Approaches in Molecular Docking?, arXiv, 2023.
### Q3: Lack of the important CASF-2016 benchmark
Thanks for providing a valuable benchmark. We assessed DrugCLIP and molecular docking methods using the CASF-2016 screening and target fishing tasks. The performances of the target fishing task are already in our Appendix, while the screening results are presented in the following Table.
| Method | top 1 | top 2 | top 3 | top 4 | top 5 |
|--------|-------|-------|-------|-------|-------|
| vina | 0.034 | 0.049 | 0.064 | 0.083 | 0.109 |
| glide | 0.195 | 0.270 | 0.360 | 0.386 | 0.416 |
| Ours | 0.259 | 0.411 | 0.500 | 0.581 | 0.637 |
Our method clearly outperforms molecular docking by a large margin. We will include the above table in our revised paper.
### Q4: Lack of interpretability compared to docking
Thanks for your interest in the interpretability of our work. Predicting affinity with binding poses is crucial for drug discovery. We aim to augment, not replace, traditional docking, enabling larger chemical library screening. DrugCLIP suits a multi-step workflow, with subsequent docking or MD simulation for binding poses.
Considering the challenge of docking a billion-size library, our approach streamlines by selecting the top 1 million. Docking them is feasible, expanding the search from 1 million to a billion. DrugCLIP excels in multiple benchmarks, offering an edge in exploring a broader, accurate range of drug candidates.
---
Rebuttal 2:
Title: Respond to authors' rebuttal
Comment: Please, look at the authors' rebuttal and the other reviewers' comments and indicate if you would like to change anything in your review.
---
Rebuttal Comment 2.1:
Title: Reminder
Comment: A reminder of this. | Summary: ## DrugCLIP: Contrastive Protein-Molecule Representation Learning for Virtual Screening
The authors present DrugCLIP, a contrastive learning method for virtual screening. DrugCLIP is designed to align representations of small molecules and protein binding pockets. By using a contrastive learning framework, the authors are able to use data without associated affinity measurements, which are relatively scarce. Additionally, the authors present a data augmentation technique based on identifying homologous proteins.
Strengths: The authors employ an interesting framing of virtual screening and achieve good performance across a small panel of benchmark datasets with improved speed compared to more expensive traditional methods. The authors compare their method to several existing approaches and perform additional analysis around the practical usability and behaviour of the model.
Weaknesses: * Clarity and explanatory detail could be improved throughout
* The data augmentation method seems dubious. Whilst it clearly provides more data, selectivity is obviously a vital aspect of developing therapeutics. By using these augmentations it does not seem to this reviewer that the ability of the model to determine selective binding is well-served and really serves to smooth what is fundamentally not a smooth function due to cliff effects.
* It would be interesting to explore performance on benchmarks of highly related targets and ligands, such as the KIBA and Davis Kinase inhibitor datasets.
### Minor Comments
* Title: Typo in "Contrasive"
* This reviewer disagrees with the notion that the key issue is to identify which molecules bind to a particular target. While this is an important subproblem, it fails to address downstream problems. Co-crystallised structures and affinity measurements will still be required in experimental campaigns, even if we had a reliable binder identification method. These are the time-consuming and expensive steps. Docking and affinity prediction methods implicitly address the binder identification problem, whilst also providing a proxy for one of these expensive measurements. While docking is time consuming, it produces a richer output: both pose and score.
* L20-22 This is clearly true but perhaps lacks some nuance. Library composition must also have a significant impact.
* The authors state to the best of their knowledge this is the first information retrieval based framing of the virtual screening. I think this claim holds water but there does exist very similarly motivated prior work on a related application that should be discussed and referenced for completeness [1].
* L71 value judgments such as "Innovative" and "superb" (L73) should be avoided
* L127 the authors should be explicit about the type of noise added in the corruption processes.
* L197 it would provide better clarity if the authors could use technical terms such as apo and holo
* It would be helpful if the authors could be more descriptive about the noising procedure using RDKit. The authors refer to simulation but it is not immediately clear on what this is in reference to.
[1] https://www.biorxiv.org/content/10.1101/2022.04.26.489341v1.abstract
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * How are ligands combined with the pocket in the augmented dataset?
* Do the authors perform any pre-processing of the structures, such as relaxation?
* I'm a little confused by the description of SE(3) Equivariance. The task is suited to invariance and the input appears to be invariant (based on discussion of distance-based encodings of structure).
* Masking atom types -> Is this masking the element type or the atomic identifier (e.g. the 37 standard atoms in Proteins)
* Do the authors apply any quality thresholding (e.g. based on pLDDT) for the data augmentation technique?
* I would appreciate it if the authors could expand on the purpose of adding the central node ([CLS])?
* Why do the authors filter out proteins with only one known pocket in the ChEMBL dataset?
* Why do the authors not report BEDROC on DUD-E Finetune?
* The authors state they remove all targets present in DUD-E from the training data. How exactly is this performed?
* For the human evaluation, it would be interesting to present docked structures to the experts as well as the molecular structures.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Technical weaknesses discussed above. No ethical concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer Bb5s
We sincerely thank the reviewer for your valuable advice. We will address the typos and word usage issues as you have pointed out. Regarding the questions about implementation details, we have included them in the global response.
### Q: L20-22: library composition also has a significant impact
Agreed, library composition significantly impacts virtual screening. Recent research indicates that ultra-large libraries not only offer improved hit quality due to their scale but also due to their reduced bias and lesser prior exploration, in contrast to smaller libraries. In the original paper, we prioritized efficient virtual screening with a large library and didn't fully delve into the composition-size relationship. This will be discussed in the revised version.
### Q: There exists prior work using information retrieval for VS
Thanks for pointing out this work that treats virtual screening as pocket matching. While it proposes pocket pretraining to find ligands for similar pockets, our approach is different. We directly align 3D representations of unbound molecule conformations and protein structures, without needing explicit pocket and ligand binding info. We'll cite this paper in our revised version and clarify the differences, emphasizing our unique contributions.
### Q: Lack of binding structures and affinity measurements compared to docking
We agree that complex structures and affinity measurements are essential. Rather than replace conventional docking, DrugCLIP augments the process by allowing the screening of a larger chemical library. Our method is suitable for a multi-step workflow, with subsequent docking or MD simulation. Considering the challenge of docking billion-size libraries, our method streamlines selection, enabling researchers to conduct docking within a feasible timeframe. It expands the search space and outperforms other methods both in efficiency and accuracy.
### Q: Confusion of SE(3) Equivariance
We apologize for not explaining SE(3) equivariance in our paper. Our model uses SE(3) equivariant EGNN heads for the pretraining task involving denoising coordinates. However, in the screening task, we exclusively utilize invariant representations and do not employ the equivariant coordinates. We'll update our description to clarify this.
### Q: More experiments on the metric of BEDROC on DUD-E
We didn't report the BEDROC results for the finetuning experiment because they were absent in the baselines, and the lack of code hindered our ability to obtain them. Nevertheless, our method attained a notable BEDROC result of 71.71 on the DUD-E dataset.
### Q: The implementation details of removing overlap targets
In this paper, we adopted a straightforward approach to address the issue of overlapped protein targets in the training set by removing the corresponding pdbid, following other baselines in our comparison. We further explored using sequence identity at 90% thresholds to mitigate similar targets. The results are detailed in the following table.
Table 1. Results on DUD-E with strict overlap-removing technique
| Method | AUC | BEDROC | EF @0.5% | EF @1% | EF @5% |
|--------|-------|--------|----------|--------|--------|
| Glide(docking) | 76.7 | 40.7 | 19.39 | 16.18 | 7.23 |
| Vina(docking) | 71.6 | N/A | 9.13 | 7.32 | 4.44 |
| Planet(ML) | 71.6 | N/A | 10.23 | 8.83 | 5.40 |
| ours | 79.07 | 40.56 | 29.51 | 25.14 | 9.34 |
We can see that our method still outperforms all the ML-based and docking-based methods even when we use stricter overlap-removal techniques, indicating our method's strong capacity for the virtual screening task.
### Q: Docked structures are needed for human evaluation
Docked structures are crucial for human evaluation. Expert evaluators in our experiments are skilled in docking and can use their preferred tools. This ensures evaluation reflects their expertise. Therefore, we didn't provide pre-docked structures. It is worth mentioning that, to our knowledge, most human experts have used computational tools like AutoDock and the Schrodinger suite in the evaluation process. This explanation will be added to our revised paper.
### Q: Experiments that explore performance on benchmarks of highly related targets and ligands (Kinase)
First and foremost, we extend our gratitude for drawing attention to the selectivity issue. It's true that data augmentation for highly similar pockets might bring noise into the training data, posing challenges in distinguishing similar pockets. Yet, our testing on kinase datasets, which demand high selectivity, shows that this strategy doesn’t negatively impact the ability to identify the true kinase target among candidates, as reflected in top-k accuracy metrics.
We argue that large-scale pre-trained models can even benefit from partially noisy datasets, an observation supported by our experiments on PDBbind. The activity cliff issue mainly emerges when key amino acid positions mutate. We’ve mitigated this by using similarity filters, and even when the cliff effect occurs in some generated data, we believe the binding affinity of such weakened molecules still exceeds that of average negative samples, making it suitable for contrastive learning.
In summary, while selectivity is indeed a concern, our evidence from kinase datasets indicates that incorporating new, even somewhat noisy, datasets can enhance large-scale pre-trained models. We remain committed to refining our approach, always considering your invaluable feedback, to optimize the process of identifying selective binders.
Table 2, Selective Binding Prediction on Kinase Inhibitor Benchmark
| Method | top 10 acc | top 20 acc| top 30 acc| top 40 acc| top 50 acc|
|--------|--------|--------|--------|--------|--------|
| w/o aug| 0.165 | 0.273 | 0.349 | 0.419 | 0.481 |
| w aug | 0.169 | 0.273 | 0.355 | 0.429 | 0.496 |
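For reference, the top-k accuracy reported in Table 2 can be computed as sketched below: for each inhibitor, candidate pockets are ranked by model score and we check whether the true kinase target falls within the top k. The function and toy inputs are hypothetical illustrations, not our exact evaluation code.

```python
def top_k_accuracy(scores, true_index, ks):
    """scores: per-query list of similarity scores over all candidate
    kinase pockets; true_index: index of the correct target per query.
    Returns, for each k, the fraction of queries whose true target
    appears among the top-k ranked candidates."""
    hits = {k: 0 for k in ks}
    for query_scores, target in zip(scores, true_index):
        # rank candidate indices by descending score
        ranked = sorted(range(len(query_scores)),
                        key=lambda i: query_scores[i], reverse=True)
        rank = ranked.index(target)  # 0-based rank of the true target
        for k in ks:
            if rank < k:
                hits[k] += 1
    return {k: hits[k] / len(scores) for k in ks}
```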
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for their response and incorporating my suggestions in their additional experiments .
Re Table 1: Why do the authors think their model suffers a performance hit upon stricter thresholding (the other ML method, Planet, does not)? The 90% threshold also seems much too high to me. Could the authors provide some justification or additional experiments at more stringent cutoffs? It seems the EF scores are quite affected.
Re Table 2: Which specific dataset is this evaluation performed on? I would like to be able to contextualise these results with what is reported in the literature.
---
Reply to Comment 1.1.1:
Comment: ## Response to Re Table1
Thank you for highlighting this crucial facet of evaluating machine learning techniques. We deeply regret that while we conducted supplementary experiments, we are constrained from sharing the outcomes during the current discussion phase. Consequently, we have augmented our explanation with more theoretical insights.
To begin with, we emphasize the prevalent performance challenges that afflict the majority of machine learning methods, as extensively discussed in preceding research. To elucidate, a comprehensive study by [1] meticulously assessed an array of machine learning methods, unveiling that **incorporating akin structures** within the training dataset notably **enhances performance**. Additionally, it was observed that training on a specific protein family and subsequently testing on **dissimilar proteins** consistently results in **subpar performance** [2].
Regarding methods like Planet, it is worth noting that empirical outcomes beyond utilizing a 90% threshold for testing were not evident. Therefore, **in alignment with the Planet paper**, we opted to mirror comparable configurations by employing MMSeqs2 to expunge homologous sequences from our training data. Furthermore, **our performance across all configurations consistently surpassed alternative machine learning methods**, an accomplishment that bears significance.
In terms of the machine learning aspect, contrastive learning offers a substantial advantage by ensuring that our method experiences **a lesser decline in performance compared to other supervised learning approaches**. Rather than directly modeling the relationship between protein-pocket pairs and ligand structures, contrastive learning focuses on extracting distinctions between strongly binding instances and negative samples. This distinction becomes evident in **the t-SNE visualizations from Section 4.4**, which underscore the potential pitfalls of utilizing conventional supervised learning techniques. These methods tend to yield representations that are notably imbalanced, and the clustering has proven that the **ML baselines memorize the pocket templates and ligand structures**. This outcome serves as compelling evidence that our approach adeptly captures the fused embeddings of both pockets and ligands. In contrast, competing methods fall short in this regard, resulting in a more pronounced deterioration of performance when faced with scenarios where similar pockets or ligands are excluded from the training set.
Within the framework of pretraining and fine-tuning, our methods exhibit heightened resilience in **capturing intricate pocket features**. This resilience arises from the fact that many proteins, despite sharing similar sequences, exhibit dissimilar pockets characterized by divergent atom-level structures [3], e.g. single mutant residue. Notably, our model undergoes pretraining to accurately predict atom types and precise atom coordinates. This inherent capability empowers our model to effectively discriminate between proteins that possess analogous sequences—a competence that surpasses approaches that merely extract protein embeddings on a more generalized level. This finer differentiation is pivotal in rendering our method more suitable for virtual screening.
[1] Su. et al, Tapping on the Black Box: How Is the Scoring Power of a Machine-Learning Scoring Function Dependent on the Training Set?, J. Chem. Inf. Model. 2020
[2] Wang. et al, Yuel: Improving the Generalizability of Structure-Free Compound–Protein Interaction Prediction, J. Chem. Inf. Model. 2022
[3] Davis et al, Comprehensive analysis of kinase inhibitor selectivity, Nature Biotechnology, 2011
## Response to Re Table2
We sincerely thank you for providing some valuable benchmarks in the review section.
Though KIBA and Davis Kinase inhibitor datasets are well-known kinase selectivity datasets, they are relatively out of date, as the number of kinase inhibitors grows dramatically in this decade. Furthermore, these datasets lack the crucial provision of associated structural information. This omission poses a challenge for structure-based methodologies like ours, which inherently rely on such data for effective application.
Therefore, we build a novel dataset that provides protein structures of the kinase and the molecule structures of the inhibitors. To assemble this dataset, we meticulously curated 154 kinase structures sourced from the KLIFS database (https://klifs.net/). Complementing this, we harnessed the power of data mining techniques to derive a collection of 9423 inhibitor structures from the ChEMBL database with reported bioactivity data. In our experiments, the goal is to identify the correct kinase within the top 1/5/10 ranked pocket structures for the given inhibitor structure input.
In summary, a new test dataset is built by collecting kinase structures from the KLIFS database and inhibitors from ChEMBL.
---
Rebuttal 2:
Title: Respond to authors' rebuttal
Comment: Please, look at the authors' rebuttal and the other reviewers' comments and indicate if you would like to change anything in your review. | Rebuttal 1:
Rebuttal: We truly appreciate the reviewers' time in reviewing our project, and we will incorporate the suggestions to revise our paper thoroughly. In the global responses, we want to provide explanations on some implementation details for a better understanding of our methodology.
### Q: Explanation (atom type and coordinates) of the denoising pretraining tasks
We apologize for any confusion caused. As pretraining is not our main focus, we haven't provided detailed information. To clarify, we mask the atom identifier, not the element type, and add uniform noise in [-1, 1] to 15% of the atom coordinates. Additional pair-distance prediction heads estimate the uncorrupted distances, and the SE(3)-equivariant head directly predicts the correct coordinates. These pretraining tasks enable effective learning from large-scale data. We will include more details in the Appendix of our revised paper. Thank you for your feedback.
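The corruption step can be sketched as follows. This is a plain-Python illustration of the described scheme (mask atom identifiers and add uniform [-1, 1] noise to a fraction of coordinates); the function name, mask token, and data layout are hypothetical, not our actual pretraining code.

```python
import random

def corrupt(atom_ids, coords, mask_token=-1, frac=0.15, rng=None):
    """Pick a fraction of atoms; replace their identifiers with a mask
    token and perturb their coordinates with uniform [-1, 1] noise.
    Returns the corrupted copies plus the corrupted indices, which the
    denoising heads are trained to reconstruct."""
    rng = rng or random.Random(0)
    n = len(atom_ids)
    k = max(1, int(frac * n))
    idx = rng.sample(range(n), k)
    new_ids = list(atom_ids)
    new_coords = [list(c) for c in coords]
    for i in idx:
        new_ids[i] = mask_token
        new_coords[i] = [x + rng.uniform(-1.0, 1.0) for x in new_coords[i]]
    return new_ids, new_coords, idx
```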
### Q: A descriptive statement about the noising procedure using RDKit
The ETKDG algorithm in RDKit selects an initial conformation and builds a tree of low-energy conformers in multi-dimensional space. It adds new conformations, explores torsional freedom, and optimizes geometries to minimize energy, transforming the binding conformations into free conformations.
### Q: How are ligands combined with the pocket in the augmented dataset?
Our data augmentation technique hypothesizes that ligands can bind to homologous proteins. Besides, DrugCLIP utilizes a two-tower model, negating the requirement to locate binding poses; instead, RDKit-sampled conformations are used in the augmented dataset. Further details will be provided in the revised paper.
### Q: Pre-processing of the protein structures
Relaxation and adding partial charges are common preprocessing for docking methods. However, our deep-learning-based method is insensitive to occasional inaccuracies in coordinates and can be trained with raw element types. Therefore, only minimal cleaning-ups are performed to remove irrelevant molecules like water.
### Q: Quality thresholding for the data augmentation
We selected structures with globalMetricValue >= 70 and fractionPlddtConfident + fractionPlddtVeryHigh >= 0.9 on Google Cloud BigQuery. Proteins meeting these criteria were clustered to a 50% identity level using MMseqs2. More details are included in the Appendix.
### Q: Explanation of the purpose of adding the CLS node
In the molecule pretraining model, the central node (CLS) serves as an embedding for the entire input molecular structure, acting as a global aggregator. It captures essential information from the whole molecule, a common practice in molecular pretraining methods, and enables the model to consider the entire structure in downstream tasks.
### Q: The reason to filter proteins with only one known pocket in ChEMBL
Our model focuses on cases where the protein structure's pocket is clearly defined. When proteins have multiple pockets, uncertainties about the exact binding pocket can arise without a complex structure. For clarity and consistency, we only include proteins with a single identified pocket in our dataset. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Unsupervised Learning for Solving the Travelling Salesman Problem | Accept (poster) | Summary: This paper proposes UTSP, an unsupervised learning framework, to solve the Travelling Salesman Problem (TSP). It consists of two phases, including heat map construction and local searching. Specifically, a surrogate loss function is proposed to train the GNNs, encouraging the model to find the shortest path and respect the TSP constraint implicitly. The built heat map helps reduce the search space and facilitate the local search afterwards. Empirically, with only ~10\% model parameters and ~0.2\% training samples, UTSP is competitive with or outperforms baselines in terms of solution quality and inference time. The analyses about search space and smoothing effect are also conducted.
Strengths: * The proposed method (in Section 2) is sound and novel. The theoretical proof (in Appendix) is correct.
* Technically, the proposed method significantly reduces the model parameters and training samples. It does not need labeled data and avoids the sparse reward problem, making it more attractive than the SL- and RL-based methods.
* The empirical analyses of the non-smoothing heat map and reduced search space are interesting.
* The source code is provided.
Weaknesses: * The scope of this paper may be limited, since:
* The proposed method seems to be only applicable to TSP.
* Much domain knowledge is involved in the local search process.
* Too many hyperparameters in the proposed method.
* The review of related work is *too limited*. Many recent works regarding neural methods for solving TSP are missing. It is suggested to add the related work section in Appendix.
* The presentation of local search (in Section 3) is not clear. Could you illustrate the process using a figure?
* The theory of the proposed method in Section 2 is quite interesting. However, there seems to be a gap between the theory and the proposed method. For example, the theory assumes that each row and column of $\mathbb{T}$ contains exactly one value 1 and $n-1$ values 0. However, the objective function does not exert an explicit constraint on the discreteness of $\mathbb{T}$ or $\mathcal{H}$. Based on the empirical result in Figure 5, the maximum value of $\mathcal{H}$ is around 0.175. Have you considered adding a regularization term to achieve this objective?
* Insufficient Evaluation:
* *Lack of Baselines:* Several recent end-to-end neural methods needed to be compared, including POMO [1], DIMES [2], and DIFUSCO [3].
* *Larger Size:* The evaluation of this paper is conducted on TSP1000 (maximally), while recent works [2, 3, 4] (also heat map based methods) consider TSP10000. It is interesting to see how your method performs on large-scale instances.
* *Lack of Benchmark Results:* Results on the classical TSP benchmark dataset (i.e. TSPLIB [5]) are needed.
* Minor:
* The citation format is not correct. The authors could use ~\cite{}.
* In line 57, "does not" -> "our method does not".
* In line 182, "me thods" -> "methods".
* Please change the blue and red colors in Fig. 7 and 8. The current version is indistinguishable.
* In line 414, the formulation should be $\mathcal{H}\_{i,i}=\sum\_{k=1}^{n}\mathbb{T}\_{i,k}\mathbb{T}_{i,k+1(\text{mod }n)}=0$.
* In the caption of Figure 9, add $q_{n-1}=1$.
* Remove the Checklist on Page 17.
[1] POMO: Policy optimization with multiple optima for reinforcement learning. In NeurIPS 2020.
[2] DIMES: A differentiable meta solver for combinatorial optimization problems. In NeurIPS 2022.
[3] DIFUSCO: Graph-based diffusion solvers for combinatorial optimization. In arXiv:2302.08224.
[4] Generalize a small pre-trained model to arbitrarily large tsp instances. In AAAI 2021.
[5] http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95
----
**Overall**, despite several limitations, I lean towards borderline acceptance since I like the idea of this paper. It would be a good submission if the authors resolve most of the concerns.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * In line 72, why is the weight matrix set as $W_{i,j}=e^{-D_{i,j}/\tau}$? I do not see any explanation.
* In line 80, why do you use column-wise Softmax? How about row-wise Softmax, together with the column-wise constraint in Eq. (2)?
* Could you add more details (e.g., model structure) about the Scattering Attention GNN (SAG) in the Appendix? Currently, it is unclear how to obtain $\mathbb{T}$ using $W$ and $F$.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: It seems that the authors did not discuss the limitations of this work.
No negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q**: The review of related work is too limited. ... It is suggested to add the related work section in Appendix.
A: We add a new section (refer to the "replies to all reviewers"), where we include the recent work of [1][2][3][4].
**Q**: The presentation of local search (in Section 3) is not clear. Could you illustrate the process using a figure?
A: We will add a figure (refer to the figure in the rebuttal one page pdf).
**Q**: The theory of the proposed method in Section 2 is quite interesting….Have you considered adding a regularization term to achieve this objective?
A: In our model, the “smoothness” of the signal is implicitly encoded in the GNN part. We use the Scattering Attention GNN (SAG), which has both low-pass and band-pass filters. We follow the setting in [Min et al., 2022], where the authors use three low-pass filters and three band-pass filters (refer to the figure in the one-page rebuttal pdf). The number of band-pass filters determines how ‘discrete’ the signals are. When we exclude the band-pass filters (setting them to 0), the GNN consists of only 3 low-pass filters, which leads to oversmoothing. Consequently, this results in a heat map similar to Figure 5 (left), with most elements of $H$ approximately equal to 0.01.
Upon adding 3 band-pass filters, the maximum value of $H$ increases to around 0.175. This indicates that incorporating band-pass filters enhances the discreteness of $H$, contributing to higher values and a more distinguishable pattern.
Overall, the regularization is integrated into the Scattering Attention GNN model. In our experiments, we observed that increasing the number of band-pass filters (or reducing the number of low-pass filters) results in a higher maximum value of $H$. We will elaborate further on this observation in the appendix.
[Min et.al 2022] Min, Yimeng, et al. "Can Hybrid Geometric Scattering Networks Help Solve the Maximum Clique Problem?." Advances in Neural Information Processing Systems 35 (2022): 22713-22724.
**Q**: Lack of Baselines:
A: We compare our model with several recent end-to-end neural methods, including POMO [1], DIMES [2], and DIFUSCO [3]; please refer to the official comment.
**Q**: Larger Size
A: Regarding TSP-10000, it is important to highlight that these studies [2, 3, 4] also use the same test dataset containing only 16 samples. Consequently, due to the limited dataset size, the performance results may not be a reliable indicator. By employing the graph sampling technique suggested in [4], we evaluate UTSP and achieve a ~3.05% gap in approximately 1 hour. For reference, [4] reported a 4.3% gap in 21 minutes, and DIMES achieved a 4.0% gap in 30 minutes and a 3.2% gap in 3.5 hours. DIFUSCO outperforms all others, reporting the best performance with a 2.5% gap at a time cost of 47 minutes.
[1] POMO: Policy optimization with multiple optima for reinforcement learning. In NeurIPS 2020.
[2] DIMES: A differentiable meta solver for combinatorial optimization problems. In NeurIPS 2022.
[3] DIFUSCO: Graph-based diffusion solvers for combinatorial optimization. In arXiv:2302.08224.
[4] Fu et al. Generalize a small pre-trained model to arbitrarily large tsp instances. In AAAI 2021.
**Q**: Lack of Benchmark Results:
A: The objective of this paper is to demonstrate the applicability of UL (unsupervised learning) for the TSP. To assess our model's performance, we conducted evaluations on the same datasets used in [Fu et al., 2021], which serves as our primary baseline. We leave the exploration of the TSPlib datasets to future work.
**Q**: Minor:
A: We fixed the typos.
**Q**: In line 72, Why the weight matrix is set...
A: The temperature parameter $\tau$ is used to build the adjacency matrix $W$ from the distance matrix $\mathcal{D}$: a low $\tau$ (close to zero) makes the graph less connected (more sparse), while a higher $\tau$ makes it denser. We will add more discussion.
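As an illustration of this construction, here is a minimal sketch (our own, not the paper's code; `soft_adjacency`, the coordinates, and the $\tau$ values are assumptions) showing how $\tau$ controls the effective density of $W_{i,j}=e^{-D_{i,j}/\tau}$:

```python
# Sketch of W = exp(-D / tau) from city coordinates. A small tau keeps only
# near neighbors (off-diagonal weights shrink toward 0, i.e. a sparser graph);
# a large tau pushes all weights toward 1 (a denser graph).
import numpy as np

def soft_adjacency(coords: np.ndarray, tau: float) -> np.ndarray:
    """coords: (n, 2) city positions; returns the (n, n) weight matrix W."""
    diff = coords[:, None, :] - coords[None, :, :]
    D = np.linalg.norm(diff, axis=-1)   # Euclidean distance matrix, D[i, i] = 0
    return np.exp(-D / tau)

rng = np.random.default_rng(0)
xy = rng.random((5, 2))                 # 5 random cities in the unit square
W_sparse = soft_adjacency(xy, tau=0.01) # low tau: off-diagonal weights ~ 0
W_dense = soft_adjacency(xy, tau=10.0)  # high tau: weights close to 1
```

Note that the diagonal of $W$ is always 1 since $D_{i,i}=0$; in practice one would mask or ignore self-loops.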
**Q**: In line 80, why do you use column-wise Softmax? How about row-wise Softmax, together with the column-wise constraint in Eq. (2)?
A: When we use row-wise Softmax together with the column-wise constraint, we find that the performance of our model is NOT as good as using column-wise Softmax + row-wise constraint.
To be specific, taking TSP 100 as an example, when using column-wise Softmax + row-wise constraint, we observed 780 fully covered instances in 1,000 validation samples, as shown in Figure 6 (Right). However, when using row-wise Softmax, together with the column-wise constraint in Eq. (2), we only observe 82 fully covered instances.
Why use column-wise Softmax? This is our motivation:
The heat map $\mathcal{H}$ is built from $\mathbb{T}$; Eq. (1) can be written as
$
\mathcal{H} = \sum_{t=1}^{n-1}p_t p^T_{t+1} + p_np_1^T,
$
where $p_t \in \mathbb{R}^{n \times 1}$ is the $t$-th column of $\mathbb{T}$, i.e., $\mathbb{T} = [p_1|p_2|...|p_n]$.
That is to say, $\mathcal{H}_{ij}$ is based on the columns of $\mathbb{T}$ (see Figure 9 in the appendix); thus we want to encourage a more distinguishable representation (ideally one dominant entry versus $n-1$ near-zero entries) in each column of $\mathbb{T}$.
Now, in our model, we have two normalizations. The first one involves column-wise Softmax with row-wise constraints. The row-wise constraint is applied in a penalty form and can be seen as a standard normalization, where all outputs are divided by the sum of all outputs. Comparing this to the column-wise Softmax, the row-wise constraint (standard normalization) encourages a more spread-out representation, whereas the Softmax encourages a more distinguishable representation. Let’s take an example:
softmax([1,2]) = [0.2689, 0.7311]
standard normalization([1,2]) = [0.3333, 0.6666]
softmax([5,10]) = [0.0067, 0.9933]
standard normalization([5,10]) = [0.3333, 0.6666]
softmax([10,20]) = [4.542e-5, 0.9999]
standard normalization([10,20]) =[0.3333, 0.6666]
Since we aim to build a distinguishable column representation, we apply a softmax on the columns and put a row-wise penalty.
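The numbers in the example above can be checked directly; this small sketch (ours, for illustration) contrasts the two normalizations and shows that softmax sharpens as the scale grows while standard normalization is scale-covariant:

```python
# Softmax concentrates mass on the largest entry as inputs grow, while
# dividing by the sum keeps the same spread-out ratio at any scale.
import numpy as np

def softmax(x):
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max())   # subtract the max for numerical stability
    return e / e.sum()

def standard_norm(x):
    x = np.asarray(x, dtype=float)
    return x / x.sum()

print(softmax([1, 2]))          # ~[0.2689, 0.7311]
print(standard_norm([1, 2]))    # [0.3333, 0.6667]
print(softmax([10, 20]))        # ~[4.54e-05, 0.99995]
print(standard_norm([10, 20]))  # [0.3333, 0.6667] -- unchanged by scale
```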
**Q**: Could you add more details (e.,g., model structure) ...
A: We add a figure about the SAG model in the one page pdf rebuttal. We will revise the appendix and add more details about the model.
---
Rebuttal Comment 1.1:
Comment: Dear Authors: Thanks for the responses and additional experiments. Before modifying my evaluation, the experimental comparison (e.g., POMO, Att-GCRN, DIMES, DIFUSCO, and UTSP trained on TSP100) on benchmark instances is needed. It could also evaluate the generalization capability of the proposed method. Please consider including large-scale benchmark instances as well.
---
Reply to Comment 1.1.1:
Title: additional experiments.
Comment: Thank you for your comments.
Please refer to the one-page rebuttal .pdf file in the attachment; we add POMO, Att-GCRN, DIMES, DIFUSCO, and UTSP performance in the tables there.
For DIMES (https://arxiv.org/pdf/2210.04123.pdf), the authors only evaluate their model on TSP 500 and 1000.
For DIFUSCO (https://arxiv.org/pdf/2302.08224.pdf), the authors evaluate their model on TSP 500 and 1000; DIFUSCO reports its gap on TSP 100 but does not include the time cost.
Here, we compare our UL method with 2 RL methods (POMO, DIMES) and 2 SL (Att-GCRN, DIFUSCO) methods on TSP 100:
| Method | Type | Gap (\%) | Time |
| ----------- | -------------------- |----------- |----------- |
| POMO | RL | 0.14 | 1m |
| DIMES | RL+AS+MCTS | 0.22 | 42m |
| Att-GCRN | SL | 0.0096 | 3.94m + 5.25m |
| DIFUSCO | SL+MCTS | 0.0013 | 17m |
| UTSP | UL, Search | -0.0019 | 5.68m+ 5.21m |
*Please note that in the original DIFUSCO, the authors trained their model on 8 v100 GPUs. Here, we have trained both DIFUSCO and UTSP on a single v100 GPU. | Summary: The authors propose a continuous relaxation of the 2D Euclidean Travelling Salesman Problem. Specifically, they replace the combinatorial optimization problem with a continuous optimization problem of the form
*Minimize f(A) over all column-stochastic matrices $A \in \mathbb R^{n \times n}$*
where $f$ is a smooth function and $A$ is a soft indicator matrix whose $t^{th}$ column indicates the city visited at step $t$ of the tour. The matrix $A$ is obtained via a deep net $A = \phi_\theta(X)$ where $X$ contains the positions of the cities, therefore leading to a problem of the form
*Minimize $f(\phi_\theta(X))$ over $\theta$*
which is optimized via SGD. From the soft indicator matrix $A$, the authors build a hard indicator matrix by performing a local search algorithm.
The proposed approach significantly differs from RL approaches, since it is based on optimizing a smooth objective. It is also significantly different from imitation learning approaches, in which a neural network is trained to imitate the output of an accurate but slow solver (typically the Concorde algorithm, which is more or less the default TSP solver in the OR community).
The proposed approach outperforms previous RL approaches and imitation learning approaches.
Strengths: * The proposed approach is simple and outperforms previous deep learning methods when applied to large TSP instance.
* The accuracy and sample efficiency achieved by the proposed approach are *very* impressive.
* The TSP problem is an important and prototypical combinatorial optimization problem. The proposed approach, due to its simplicity and performance, has the potential of becoming quite influential.
Weaknesses: 1. A more extensive discussion of the existing literature is needed. In particular, the authors should explain the method presented in [1], as it is the only one that achieves results comparable to the proposed approach. I would recommend to have a `related work' subsection.
2. The presentation could be improved. In particular I find that referring to the matrix $T$ as a transition matrix is very confusing. $H$ is a transition matrix, since $H_{ij}$ is to some extent the probability of moving from i to j. If I understand correctly the proposed relaxation, the $t^{th}$ column of $T$ is a probability distribution over the cities that indicates what is the city most likely to be visited at step $t$ of the tour. So I would refer to $T$ as a soft indicator matrix. The left side of Figure 3 is confusing: I take it as meaning that if $T$ is used as a transition matrix, then it leads to a non-Hamiltonian cycle. I think it is better to not encourage the reader to think of $T$ as a transition matrix. If $T=[p_1, \ldots, p_n]$ is viewed as a soft indicator matrix that indicates the city visited at time $t$, then the formula for $H$ can be written as a sum of outer products\
$
\displaystyle H = \sum_{t=1}^{n-1} p_{t} p_{t+1}^T + p_{n} p_{1}^T
$\
from which it is clear that $H$ is a transition matrix. I think equation (1) and the displayed matrix $V$ are unnecessary and take a lot of space. Similarly, figures 1, 2 and 3 take a lot of space and do not bring much clarity in my opinion. I think the space could be better used by describing related works in more depth, adding context for the experimental results (which are very impressive), and also maybe discussing continuous relaxations in general. Overall, the proposed strategy is very simple (which I see as a strength): I think it could be presented in a more concise and cleaner manner.
3. The experimental section could be polished. I think it is important to explain what is the "gap" and the "rounding problem". (Also line 177 is hard to read, and the first paragraph is not well written.)
[1] Zhang-Hua Fu, Kai-Bin Qiu, and Hongyuan Zha. Generalize a small pre-trained model to arbitrarily large tsp instances.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: Question1: On line 198 the authors write
> We remark that the UTSP takes a shorter total running time (inference + search) and outperform the existing learning baselines on these large instance.
I am confused: the algorithm from [1] seems faster. Is it not a learning baseline? Again, I think more discussion about [1] is needed.
Question 2: The GCN from Kipf and Welling is the `smoothest' GNN. Did you try other GNN/transformer architecture beyond SAG?
[1] Zhang-Hua Fu, Kai-Bin Qiu, and Hongyuan Zha. Generalize a small pre-trained model to arbitrarily large tsp instances.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 4 excellent
Limitations: I did not find a paragraph about the limitation of the proposed approach. Overall this is a very strong work in my opinion. An honest explanation of the shortcomings of the proposed approach would improve the scientific contribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We update the experiment section in the one page rebuttal pdf.*
**Q**: A more extensive discussion of the existing literature is needed. In particular, the authors should explain the method presented in [1], as it is the only one that achieves results comparable to the proposed approach. I would recommend to have a `related work' subsection.
A: We will revise the introduction part and add a new section, refer to the "replies to all reviewers".
**Q**: The presentation could be improved. In particular I find that referring to the matrix... I take it as meaning that if $\mathbb{T}$ is used as a transition matrix, then it leads to a non-Hamiltonian cycle.
A: **I take it as meaning that if $\mathbb{T}$ is used as a transition matrix, then it leads to a non-Hamiltonian cycle.** This is correct!
We will change "transition matrix" to "soft indicator matrix" and rewrite $\mathcal{H}$ as:
$
\mathcal{H} = \sum_{t=1}^{n-1}p_t p^T_{t+1} + p_np_1^T,
$
where $p_t \in \mathbb{R}^{n \times 1}$ is the $t$-th column of $\mathbb{T}$, i.e., $\mathbb{T} = [p_1|p_2|...|p_n]$.
We will remove $\mathbb{V}$, move some figures to the appendix, and add more discussion of related works.
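A minimal sketch of this outer-product formula (our illustration; `heat_map` and the toy permutation are assumptions, not the authors' implementation). If $\mathbb{T}$ is doubly stochastic, each row of $\mathcal{H}$ sums to 1, and in the hard (permutation) limit $\mathcal{H}$ is exactly the adjacency matrix of a directed Hamiltonian cycle:

```python
# H = sum_{t=1}^{n-1} p_t p_{t+1}^T + p_n p_1^T, where p_t is the t-th
# column of the soft indicator T (a distribution over the city at step t).
import numpy as np

def heat_map(T: np.ndarray) -> np.ndarray:
    """T: (n, n) soft indicator matrix; returns the (n, n) heat map H."""
    n = T.shape[0]
    H = np.zeros((n, n))
    for t in range(n):
        p_t = T[:, t]
        p_next = T[:, (t + 1) % n]   # wrap-around pairs p_n with p_1
        H += np.outer(p_t, p_next)
    return H

# Hard limit: T a permutation matrix, visiting cities 2 -> 0 -> 3 -> 1 -> 2.
perm = np.eye(4)[:, [2, 0, 3, 1]]    # column t is the one-hot city at step t
H = heat_map(perm)                    # adjacency of the cycle 2->0->3->1->2
```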
**Q**: The experimental section could be polished. I think it is important to explain what is the "gap" and the "rounding problem".
A: The gap: given a tour length $l$ and the optimal length $l_{opt}$, the gap is defined as $(l-l_{opt})/l_{opt}$.
Rounding problem: on many instances, the best known solutions reported by Concorde are not strictly optimal (confirmed in (Joshi et al., 2019), possibly due to round-off reasons), and could be slightly improved ($< 10^{-2}$) by our algorithm (Fu et al., 2020).
We will revise the experimental section and add above discussions.
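For concreteness, a one-line helper implementing this definition (our sketch; `gap_percent` is a hypothetical name), checked against the TSP-500 numbers reported later in this rebuttal:

```python
# The "gap" is the relative excess tour length over the optimum:
# (l - l_opt) / l_opt, reported here in percent.
def gap_percent(l: float, l_opt: float) -> float:
    """Relative excess of tour length l over the optimal length l_opt, in %."""
    return 100.0 * (l - l_opt) / l_opt

# UTSP length 16.6846 vs Concorde's 16.5458 on TSP 500 gives roughly 0.84 %.
g = gap_percent(16.6846, 16.5458)
```

A negative gap means the tour is shorter than the reference solution, which is how entries like UTSP's -0.0019% on TSP 100 arise (due to the rounding problem above).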
**Q**: line 177 is hard to read:
A: When we apply the $\mathbb{T} \to \mathcal{H}$ transformation and use $\mathcal{H}$ as the heat map, we observe that $\mathcal{H}$ forms a Hamiltonian cycle. By minimizing $\sum_{i=1}^{n} \sum_{j=1}^{n} D_{i,j} \cdot H_{i,j}$, we obtain the shortest Hamiltonian cycle, which is the solution to the Traveling Salesman Problem (TSP).
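This objective can be illustrated on a toy instance (our sketch, not the paper's code): for a hard heat map $H$ encoding a Hamiltonian cycle, $\sum_{ij} D_{i,j} H_{i,j}$ is exactly the tour length.

```python
# Four cities at the corners of the unit square; the cycle 0->1->2->3->0
# traverses the perimeter, so sum_ij D_ij * H_ij should equal 4.0.
import numpy as np

coords = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

H = np.zeros((4, 4))                       # hard heat map: directed cycle edges
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    H[i, j] = 1.0

length = (D * H).sum()                     # tour length: four unit-length edges
```

Minimizing this quantity over valid (cycle-encoding) $H$ is the combinatorial TSP; the paper's relaxation optimizes it over soft $H$ produced by the GNN.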
**Q**: first paragraph is not well written:
A: As mentioned above, we will revise the introduction part and add a related works section.
Joshi et al., 2019: Chaitanya K Joshi, Thomas Laurent, and Xavier Bresson. An efficient graph convolutional network technique for the travelling salesman problem. arXiv preprint arXiv:1906.01227, 2019.
Fu et al., 2020: Targeted sampling of enlarged neighborhood via Monte Carlo tree search for TSP
**Q**: On line 198 the authors write
We remark that the UTSP takes a shorter ...on these large instance...the algorithm from [1] seems faster. Is it not a learning baseline? Again, I think more discussion about [1] is needed.
A: We will add more discussion on [1]. Here, by “large instances” we mean sizes like TSP 500 and TSP 1000. As the problem size increases, our method performs better and has a shorter running time, as summarized here (there is a typo in the TSP 500 column of the original manuscript, which overlaps with the TSP 200 column; we update the performance in the one-page rebuttal pdf):
Results of SAG + Local Search w.r.t. existing baselines. We evaluate the models on 128 TSP 500 instances and 128 TSP 1000 instances.
| Method | Type | TSP500 (Length) | TSP500 (Gap %) | TSP500 (Time) | TSP1000 (Length) | TSP1000 (Gap %) | TSP1000 (Time) |
|--------|------|-----------------|----------------|---------------|------------------|-----------------|---------------|
| Concorde | Solver | 16.5458 | 0.0000 | 37.66m | 23.1182 | 0.0000 | 6.65h |
| Gurobi | Solver | 16.5171 | -0.1733 | 45.63h | - | - | - |
| LKH3 | Heuristic | 16.5463 | 0.0029 | 11.41m | 23.1190 | 0.0036 | 38.09m |
| GAT \cite{deudon2018learning} | RL, S | 28.6291 | 73.0293 | 20.18m | 50.3018 | 117.5860 | 37.07m |
| GAT \cite{deudon2018learning} | RL, S 2-OPT | 23.7546 | 43.5687 | 57.76m | 47.7291 | 106.4575 | 5.39h |
| GAT \cite{kool2018attention} | RL, S | 22.6409 | 36.8382 | 15.64m | 42.8036 | 85.1519 | 63.97m |
| GAT \cite{kool2018attention} | RL, G | 20.0188 | 20.9902 | 1.51m | 31.1526 | 34.7539 | 3.18m |
| GAT \cite{kool2018attention} | RL, BS | 19.5283 | 18.0257 | 21.99m | 29.9048 | 29.2359 | 1.64h |
| GCN \cite{joshi2019efficient} | SL, G | 29.7173 | 79.6063 | 6.67m | 48.6151 | 110.2900 | 28.52m |
| GCN \cite{joshi2019efficient} | SL, BS | 30.3702 | 83.5523 | 38.02m | 51.2593 | 121.7278 | 51.67m |
| GCN \cite{joshi2019efficient} | SL, BS* | 30.4258 | 83.8883 | 30.62m | 51.0992 | 121.0357 | 3.23h |
| DIMES \cite{qiu2022dimes} | RL+S | 18.84 | 13.84 | 1.06m | 26.36 | 14.01 | 2.38m |
| DIMES \cite{qiu2022dimes} | RL+MCTS | 16.87 | 1.93 | 2.92m | 23.73 | 2.64 | 6.87m |
| DIMES \cite{qiu2022dimes} | RL+AS+MCTS | 16.84 | 1.76 | 2.15h | 23.69 | 2.46 | 4.62h |
| DIFUSCO \cite{sun2023difusco} |SL+MCTS | 16.63 | 0.46 | 10.13m | 23.39 | 1.17 | 24.47m |
| Att-GCRN\cite{fu2021generalize} | SL+RL MCTS | 16.7471 | 1.2169 | 31.17s + 3.33m | 23.5153 | 1.7179 | 43.94s + 6.68m |
| UTSP (Ours) | UL, Search | 16.6846 | 0.8394 | 1.37m + 1.33m | 23.3903 | 1.1770 | 3.35m + 2.67m |
We now add DIMES and DIFUSCO on TSP 500 and 1000. The new baseline DIMES takes a shorter running time but results in a larger gap; we will revise this section.
**Q**: The GCN from Kipf and Welling is the `smoothest' GNN. Did you try other GNN/transformer architecture beyond SAG?
A: We tried a graph attention network, but we did not observe a significant improvement over GCN. This could be because graph attention networks still rely on averaging the features of neighboring nodes for similarity calculations.
Developing GNNs specifically targeted at solving the TSP is an important topic, and we believe more suitable GNN structures exist; we leave this for future work.
**Q**: Limitations of the proposed approach.
A: We will add more discussion of limitations and future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed answer. The added baseline DIFUSCO performs slightly better. Could you elaborate on this method (there is a single sentence in the added related work section). In particular, does it require TSP solutions computed by another algorithm such as Concorde? (i.e. is it trained on 'exact' TSP solutions)
---
Reply to Comment 1.1.1:
Title: Does DIFUSCO require TSP solutions computed by another algorithm such as Concorde?
Comment: Thank you for your comment,
Yes, DIFUSCO necessitates TSP solutions derived from an alternative algorithm like Concorde.
The authors mentioned that they use Concorde to generate the TSP solutions, see Section 4.1 in https://arxiv.org/pdf/2302.08224.pdf.
Please refer to the one-page rebuttal .pdf file in the attachment: as shown in the "Type" column, DIFUSCO is classified as a supervised learning (SL) approach.
More elaboration on DIFUSCO will be added in the related works section.
---
Rebuttal Comment 1.2:
Comment: To me there is an important difference between algorithms that 'imitate Concorde' (that you refer to as SL) and algorithms that solve the TSP 'from scratch' (RL and unsupervised). The latter are more interesting because they are easier to adapt to new tasks. The former can only solve tasks for which a good combinatorial algorithm already exists; moreover, at best, they can only speed up the teacher algorithm. I think this point could be made stronger in the introduction.
---
Reply to Comment 1.2.1:
Comment: Thank you for your feedback. We will incorporate your suggestions into the manuscript and emphasize the significant difference between our unsupervised learning approach and other methods that rely on supervised/imitation learning.
Specifically, we will underline the fact that approaches like DIFUSCO and Fu et al. require a dataset of over **1 million** exactly solved TSP solutions for training on TSP 100.
Strengthening this point in the introduction will further clarify the novelty and the contribution of our unsupervised method. | Summary: This paper proposed a novel unsupervised learning method for solving the Travelling Salesman Problem (TSP). It employs a Scattering Attention GNN (SAG) to encode the node information. Then, the learned representation is transformed into a heatmap, which corresponds to the probability of an edge being included in the optimal solution. A novel unsupervised loss is proposed. Based on the learned heatmap, a local search procedure with randomness is used to derive the final solution. Experiments on TSP instances up to 1000 nodes show that the proposed method can use much less training data, and outperforms several existing deep TSP models.
Strengths: 1. The research motivation is clear.
2. The idea of using SAG to build the heatmap and the proposed unsupervised loss are interesting.
3. Strong empirical performance, with detailed analysis on the expressive power of SAG.
4. Generally good writing and organization.
Weaknesses: 1. Since the proposed method is heatmap based, I suggest giving a detailed review and discussion of heatmap-based methods, such as [Fu2021], [Qiu2022], [Joshi2022]. Currently the introduction is mainly from the SL/RL perspective.
2. For fair comparison, I suggest adding some ablation studies on the effect of local search. In the main results (Tables 1-3), it is unclear whether the performance improvement comes from the better learning mechanism or from the local search procedure. In addition, the baselines use sampling, beam search, and Monte Carlo tree search as decoding strategies, while the proposed method uses local search. This also makes it difficult to compare the effectiveness of the learning mechanisms. While Figures 4, 5 and 6 provide some insights, they are still indirect results.
3. The baselines are not up-to-date. It would be much stronger if state-of-the-art methods, such as DIMES [Qiu2022] (also a heatmap based method), could be compared.
4. It is unclear how the training data is generated and whether the baselines are retrained. I guess they are not, since GAT [Kool2019] and GCN [Joshi2019] are known to be hard to scale to problems of size 1000.
5. More detailed results on the training efficiency (e.g., a table) could be helpful. Currently only an informal discussion is given on Page 6.
6. The local search mechanism relies on randomness. But how it impacts the results is unclear, including how stable the performance is (std is helpful if reported), and what if no randomness is used.
7. The runtime in Table 2 seems incorrect. For the baselines, most runtime for TSP200 is much faster than TSP100. For your approach, the runtime for TSP200 is faster than TSP50. Please thoroughly check the results.
8. The proposed approach is currently limited to TSP. It would be better to have some discussions on how to extend it to other typical routing problems such as CVRP.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the above weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The proposed approach is limited to TSP. But this is not a severe limitation since it obtains good results on large-scale problems. It would be better to have some discussions on how to generalize the idea to support other routing problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please check the “replies to all reviewer” and the one page rebuttal pdf.
**Q** Since the proposed method is heatmap based, I suggest to give a detailed review and discussion on heatmap based methods, such as [Fu2021], [Qiu2022], [Joshi2022]. Currently the introduction is mainly from the SL/RL perspective.
A: We will add a related works section and will revise the introduction.
**Q** For fair comprison, I suggest to add some ablation studies on the effect of local search. In the main results (Tables 1-3), it is unclear whether the performance improvement is from the better learning mechanism, or the local search procedure. In addition, the baselines use sampling, beam search and Monte carlo tree search as the decoding strategies, while the proposed method uses local search. This also makes it difficult to compare the effectiveness of the learning mechanism. While Figure 4, 5 and 6 provide some insights, they are still indirect results.
A: In the one-page rebuttal pdf, we include ablation studies examining how changes in the search hyperparameters affect performance, shown in "Table: Search Hyperparameter Ablation Study on TSP 100" and "Table: Search Hyperparameter Ablation Study on TSP 1000". A lower value of $\alpha$ indicates that the local search algorithm prioritizes edges with higher heat-map values, whereas a higher value of $\alpha$ aligns more with an MCTS style, similar to the approach described in [Fu et al., 2021].
We also find a better performance on TSP 100, where the gap is -0.0019 %.
**Q** The baselines are not up-to-date. It would be much stronger if state-of-the-art methods, such as DIMES [Qiu2022] (also a heatmap based method), could be compared.
A: we add new baselines, refer to the one page rebuttal pdf.
**Q** It is unclear how the training data is generated and whether the baselines are retrained. I guess they are not since GAT [Kool2019] and GCN [Joshi2019] is known to be hard to scale to problem of size 1000.
A: Regarding the training data, our approach aligns with [Fu et al.]. The training data is randomly generated on a 2D plane, and we adopt the same test data set as outlined in their work.
For TSP instances with sizes 20, 50, and 100, the test data comprises 10,000 automatically generated 2D Euclidean TSP instances for each size. This test data set is widely utilized by existing learning-based algorithms.
On larger instances with n = 200, 500, and 1000, there are 128 instances for each size.
Regarding retrain:
See our manuscript, Line 395 to line 398: For UTSP (our method) and the state-of-the-art learning-based method Att-GCRN Fu et al. [2021], we run the search algorithm on **exactly the same environment (one Intel Xeon Gold 6326)** for a fair comparison.
And for other baselines, since GAT [Kool2019] and GCN [Joshi2019] have a noticeable gap from Fu et al. [2021], we directly refer to the results from Fu et al. [2021].
**Q**: More detailed results on the training efficiency (e.g., a table) could be helpful. Currently only an informal discussion is given on Page 6.
A: We will add more discussion in the new manuscript.
**Q**: The local search mechanism relies on randomness. But how it impacts the results is unclear, including how stable the performance is (std is helpful if reported), and what if no randomness is used.
A: Here we compare the performance with and without restarts on TSP 100 and TSP 1000; all other hyperparameters are the same.
On TSP 100, using restarts, the gap is -0.0019% and the std is 0.00035%.
On TSP 100, without restarts, the gap is -0.0017% and the std is 0.00025%.
On TSP 1000, using restarts, the gap is 1.1770% and the std is 0.075.
On TSP 1000, without restarts, the gap is 1.2163% and the std is 0.38.
The results indicate that introducing randomness (via restarts) helps enhance performance and can reduce the standard deviation on larger TSP instances, such as TSP 1000.
**Q**: The runtime in Table 2 seems incorrect. For the baselines, most runtime for TSP200 is much faster than TSP100. For your approach, the runtime for TSP200 is faster than TSP50. Please thoroughly check the results.
A: We evaluate our performance on the same test dataset used in [Fu et al.] and [Qiu et al.].
In that dataset, TSP instances with sizes 20, 50, and 100 have **10,000** test samples each.
However, for TSP instances with sizes 200, 500, and 1000, we have a smaller set of only **128** samples.
That’s why TSP200 is faster than TSP50 and TSP 100.
The new performance is included in the table one page rebuttal pdf.
**Q** The proposed approach is currently limited to TSP. It would be better to have some discussions on how to extend it to other typical routing problems such as CVRP.
A: A common aspect is that both CVRP and TSP share the objective of minimizing the 'distance'. As a result, we can employ expressions such as $\sum_{ij} H_{ij}D_{ij}$. We will add a future work section in our manuscript.
Reference:
Fu et al. Generalize a small pre-trained model to arbitrarily large tsp instances.
Proceedings of the AAAI Conference on Artificial Intelligence, 35(8):7474–7482, 2021
Qiu et al. DIMES: A differentiable meta solver for combinatorial optimization problems. Advances in Neural Information Processing Systems 35 (2022): 25531-25546.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. While it addressed most of my concerns, I am not quite satisfied with the ablation results on local search. What I meant is that, in Table 1-3 in the main paper and Table 1-2 in the rebuttal PDF, different methods are combinations of different learning mechanisms + search mechanisms. Taking DIMES as an example, it is RL + MCTS, while the proposed UTSP is UL + Search. Since the search mechanisms are different (MCTS vs local search), it is unclear whether the improvement over DIMES comes from the newly proposed learning mechanism, or the local search procedure.
Besides, I strongly recommend to have a detailed comparison on the training time with the SL/RL baselines (including label collection time for SL), so as to better justify the main motivation of this paper, i.e. the proposed UL method significantly reduces training cost and label requirements.
---
Reply to Comment 1.1.1:
Title: MCTS vs local search
Comment: Thank you for your clarification.
Here we change our search to MCTS; the results are below. We use the same time budget for running both MCTS and our search method.
In general, our search method outperforms MCTS, albeit not by a substantial margin. We can also see that, when using MCTS, our UL method still outperforms DIMES, so the improvement over DIMES comes from the newly proposed learning mechanism.
| Method | Type | TSP20 Length | TSP20 Gap (%) | TSP20 Time | TSP50 Length | TSP50 Gap (%) | TSP50 Time | TSP100 Length | TSP100 Gap (%) | TSP100 Time |
| ---------------------------------------------------- | --------------- | ------------ | ------------- | ---------- | ------------ | ------------- | ---------- | ------------- | -------------- | ----------- |
| Concorde | Solver | 3.8303 | 0.0000 | 2.31m | 5.6906 | 0.0000 | 13.68m | 7.7609 | 0.0000 | 1.04h |
| LKH3 | Heuristic | 3.8303 | 0.0000 | 20.96m | 5.6906 | 0.0008 | 26.65m | 7.7611 | 0.0026 | 49.96m |
| Att-GCRN | SL+RL+MCTS | 3.8300 | -0.0074 | 1.44m | 5.6908 | 0.0032 | 5.22m | 7.7616 | 0.0096 | 9.19m |
| DIMES | RL+AS+MCTS | 3.8304 | 0.0026 | 2.4h | 5.6919 | 0.0232 | 10.3h | 7.7654 | 0.05772 | 32.5h |
| DIFUSCO | SL+MCTS | 3.8303 | 0.0012 | 3.34m | 5.6908 | 0.0029 | 10.13m | 7.7612 | 0.00386 | 19.15m |
| UTSP (ours) - MCTS | UL, MCTS | 3.8303 | -0.0012 | 1.67m | 5.6899 | -0.0123 | 3.94m | 7.7608 | -0.0007 | 10.89m |
| UTSP (ours) - Search | UL, Search | 3.8303 | -0.0009 | 1.67m | 5.6894 | -0.0200 | 3.94m | 7.7608 | -0.0019 | 10.89m |
| Method | Type | TSP200 Length | TSP200 Gap (%) | TSP200 Time | TSP500 Length | TSP500 Gap (%) | TSP500 Time | TSP1000 Length | TSP1000 Gap (%) | TSP1000 Time |
| --------------------------------- | -------------- | ------------- | -------------- | ----------- | ------------- | -------------- | ----------- | -------------- | --------------- | ------------ |
| Concorde | Solver | 10.7191 | 0.0000 | 3.44m | 16.5458 | 0.0000 | 37.66m | 23.1182 | 0.0000 | 6.65h |
| LKH3 | Heuristic | 10.7195 | 0.0040 | 2.01m | 16.5463 | 0.0029 | 11.41m | 23.1190 | 0.0036 | 38.09m |
| Att-GCRN | SL+RL+MCTS | 10.7358 | 0.1563 | 1.67m | 16.7471 | 1.2169 | 3.85m | 23.5153 | 1.7179 | 7.41m |
| DIMES | RL+AS+MCTS | 10.7403 | 0.1977 | 46m | 16.84 | 1.76 | 2.15h | 23.69 | 2.46 | 4.62h |
| DIFUSCO | SL+MCTS | 10.7521 | 0.3079 | 4.12m | 16.63 | 0.46 | 10.13m | 23.39 | 1.17 | 24.47m |
| UTSP (ours) - MCTS | UL, MCTS | 10.7312 | 0.1129 | 1.67m | 16.7026 | 0.9477 | 2.70m | 23.4729 | 1.5343 | 6.02m |
| UTSP (ours) - Search | UL, Search | 10.7289 | 0.0918 | 1.67m | 16.6846 | 0.8394 | 2.70m | 23.3903 | 1.1770 | 6.02m |
---
Reply to Comment 1.1.2:
Title: Total running time comparison
Comment: Thank you for your clarification.
Here, we are comparing the total time cost. Since Att-GCRN and DIFUSCO are supervised methods, generating the ground truth label is already a very time-consuming process, especially as the size grows larger. DIMES uses reinforcement learning (RL) and does not require ground truth labels, resulting in a smaller total time cost.
Our UL method has the smallest time cost.
| TSP SIZE | Att-GCRN (supervised) | DIFUSCO (supervised) | DIMES (reinforcement) | UTSP (unsupervised) |
| -------- | -------------------------- | ----------------------- | --------------------- | ------------------- |
| 20 | 3.8h (solver) + 12h | 5.8h (solver) + 5h | 2.9h | 27m |
| 50 | 22.8h (solver) + 25h | 34.3h (solver) + 6h | 11.1h | 36m |
| 100 | 22.8h (solver) + 25h | 156.2h (solver) + 8h | 33.5h | 55m |
| 200 | 22.8h (solver) + 25h | 33.5h (solver) + 9h | 2.5h | 1.2h |
| 500 | 22.8h (solver) + 25h | 190h (solver) + 13h | 3.6h | 1.8h |
| 1000 | 22.8h (solver) + 25h | 317h (solver) + 16h | 6.3h | 2.1h |
It is important to mention that for Att-GCRN, training was performed on TSP 20 and 50; graph sampling is employed to construct small heat maps, which are then merged. Consequently, the time costs for TSP 100, 200, 500, and 1000 are the same.
For DIFUSCO, the original paper trains the model with 8× NVIDIA Tesla V100 GPUs, whereas here we train on a single V100. However, since DIFUSCO is a supervised learning method, running the solver to generate labels is already very time-consuming. | Summary: The paper proposes an unsupervised learning-based heuristic to solve the Travelling Salesman Problem. The approach consists of two steps: first a GNN is trained using a surrogate loss to output a heatmap of the edges, then the heatmap is used to guide a local search heuristic. The proposed model has significantly fewer parameters than similar previous methods and its training is very efficient. It is tested on TSP instances with up to 1000 nodes.
Strengths: 1. The paper introduces a light-weight model and sample-efficient training procedure to learn a TSP heuristic
1. Novel idea of a differentiable surrogate loss that encourages the solution to form a Hamiltonian cycle
1. Nice discussion and motivation of the choice of the GNN and precise comparison between GCN and SAG
1. The paper is clear and easy to follow
Weaknesses: 1. The main weakness is the limited scope: the paper is very specialized to the TSP, esp. the proposed surrogate loss is specific to the TSP.
1. Missing some related work: e.g. [1] and [2] and the associated baselines in the experiments. These are known to be much stronger baselines than the ones presented in Tables 1-3.
1. Some unfounded claims:
* The paper says “Such SL models scale poorly to the big instances” while [Fu et al 2021] scales remarkably well to instances with up to 10,000.
* L198 “We remark that the UTSP takes a shorter total running time (inference + search) and outperform the existing learning baselines on these large instances. ” But the reported computation times are not shorter than Att-GCRN for TSP200 and TSP50.
* L216: “when using SL, the model learns from the TSP solutions, which fails when multiple solutions exist or the solutions are not optimal.” Is there any proof/reference to motivate this statement?
* L194 “On larger instances with n = 200, 500 and 1,000, we notice that traditional solvers and heuristics (Concorde, Gurobi and LKH3) fail to generate the optimal solutions within reasonable time when the size of problems grows”. I’m not sure what is meant exactly, since we see, e.g., in Table 3 that LKH still gives the best optimality gap for TSP1000 in 38 minutes. Then for Gurobi, I wonder what the time limit was. Since it is an exact solver, it is expected to be slow; however, it should return the best solution found when it reaches the time limit. Can the authors give more details about the results (or lack thereof) with Gurobi?
1. The columns TSP200 in Tab 2 and TSP500 in Tab 3 look exactly the same. Is it a typo?
1. Many components in the approach and there is only one kind of ablation for the choice of SAG versus GCN for the model architecture. It would be interesting to do precise ablations, esp. for the quality and role of the heatmap in the final results. For example by applying the same local search strategy with the heatmap obtained by other models such as the GCN approach of [Joshi et al 2019a] or the Att-GCRN of [Fu et al 2021].
1. Experiments only on synthetic instances of the same distribution as training. Could be interesting to test on unseen distributions, e.g. TSPlib.
[1] Qiu et al, DIMES: A Differentiable Meta Solver for Combinatorial Optimization Problems, NeurIPS 2022
[2] Kwon et al POMO: Policy Optimization with Multiple Optima for Reinforcement Learning, NeurIPS 2020
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The paper says “Such SL models scale poorly to the big instances” while the approach of [Fu et al 2021] scales remarkably well to instances with up to 10,000. Maybe the authors meant *training* such SL models does not scale?
1. Have the authors tried their model on even larger instances, such as TSP10000?
1. Would the approach apply to the non-Euclidian asymmetric version of the TSP?
1. It would be useful to discuss how the temperature parameter (L73) and the number M (of elements to keep in each row of the heatmap) are fixed. In particular, how is the search space reduction (Sec. 5) affected by the choice of M?
1. L156 “when selecting the city v given u , we only consider the cities from the candidate set of v” —> do you mean the candidate set of *u* instead?
1. “the negative values are the results of the rounding problem” can the authors elaborate on what is meant by the rounding problem?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 1 poor
Limitations: The limitations are not explicitly mentioned in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q**: The main weakness is the limited scope: the paper is very specialized to the TSP
A: TSP is one of the 21 NP-complete problems outlined by Karp [Karp]. It holds a foundational position in the field of combinatorial optimization owing to its theoretical importance and practical utility, and it is fundamental within theoretical computer science and operations research. In this paper, we demonstrate that we can build unsupervised learning-based heuristics for the TSP, without the need for labeled ground-truth solutions (supervised/imitation learning) or reliance on the framework of Reinforcement Learning (RL). We consider this to be a significant advancement.
[Karp] Karp, Richard M. (1972). "Reducibility Among Combinatorial Problems"
**Q**: Missing some related work:
A: We have added new baselines and a related-work section; please refer to the table in the one-page rebuttal PDF.
**Q**: The paper says “Such SL models scale poorly to the big instances” while [Fu et al 2021] scales remarkably well to instances with up to 10,000.
A: In [Fu et al.], the model is trained on small graphs and only generates small heat maps, e.g., for 20 or 50 cities. They train the model (in a supervised manner) at a small scale and apply a series of techniques such as graph sampling, graph conversion, and heat-map merging. This means that when dealing with large instances such as TSP 200, 500, and 1000, their model always generates heat maps of size 50 by 50, and the sub-heat-maps are then merged together. Please refer to the Methodologies section in [Fu et al.] for more details.
**Q**: L198 “...not shorter than Att-GCRN for TSP200 and TSP50.
A: By “on large instances” we mean TSP 500 and TSP 1000; refer to the table in the one-page rebuttal PDF. We will revise this sentence and add more discussion on the performance.
**Q**: L216: “when using SL, ...Is there any proof/reference to motivate this statement?
A: In [Li et al.], Section 4.2
“... when there are multiple optimal solutions for the same graph ... two equivalent optimal solutions that induce completely different labellings ... which is not a useful labelling.”
We will add [Li et al.] into reference.
[Li et al.] "Combinatorial optimization with graph convolutional networks and guided tree search." Advances in neural information processing systems 31 (2018). https://arxiv.org/pdf/1810.10659.pdf
**Q**: L194 ...Can the authors give more details about the results (or lack of) with Gurobi.
A: On TSP 1000 we tried setting the time limit to the LKH running time (38.09m); Gurobi returns a ~18% gap.
**Q**: The columns TSP200 in Tab 2 and TSP500 in Tab 3 look exactly the same. Is it a typo?
A: We have fixed the typo and updated the performance in the one-page rebuttal PDF.
**Q**: Many components in the approach and there is only one kind of ablation...
A: We include ablation studies to examine how changes in the search hyperparameters affect performance. This is shown in "Table: Search Hyperparameter Ablation Study on TSP 100" and "Table: Search Hyperparameter Ablation Study on TSP 1000". A lower value of $\alpha$ indicates that the local search algorithm prioritizes edges with higher heat map values, whereas a higher value of $\alpha$ aligns more with an MCTS style, similar to the approach described in [Fu et al., 2021].
**Q**: Experiments only on synthetic instances of the same distribution as training. Could be interesting to test on unseen distributions, e.g. TSPlib.
A: The objective of this paper is to demonstrate the applicability of UL for the TSP. To assess our model's performance, we conducted evaluations on the same datasets used in [Fu et al., 2021], which serves as our primary baseline. We leave the exploration of TSPlib datasets to future research work.
**Q**: Have the authors tried their model on even larger instances, such as TSP10000?
A: Regarding TSP-10000, it is important to highlight that these studies [1, 2, 3] also use the same test dataset containing only 16 samples. Consequently, due to the limited size of the dataset, the performance results may not be a reliable indicator. Using the graph sampling technique suggested in [3], we evaluate UTSP and achieve a ~3.05% gap in approximately 1 hour. For reference, [3] reported a 4.3% gap in 21 minutes, and DIMES achieved a 4.0% gap in 30 minutes and a 3.2% gap in 3.5 hours. DIFUSCO outperforms all others, reporting the best performance with a 2.5% gap and a time cost of 47 minutes.
[1] DIMES: A differentiable meta solver for combinatorial optimization problems.
[2] DIFUSCO: Graph-based diffusion solvers for combinatorial optimization.
[3] Generalize a small pre-trained model to arbitrarily large tsp instances.
**Q**: Would the approach apply to the non-Euclidian asymmetric version of the TSP?
A: Modify the distance matrix $\mathcal{D}$ in Equation (2) to make it asymmetric.
**Q**: It would be useful to discuss ... how is the search space reduction (Sec. 5) affected by the choice of M?
A: We will add a discussion on that. Overall, increasing $M$ produces more overlapping candidates, but the search space is then larger because more possible edges are considered.
**Q**: L156 “when selecting the city v given u ...
A: Here, the candidate set of v means that we only select city v from the cities with the top $M$ heat-map values, or the nearest $M$ cities. We will revise this sentence.
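To illustrate the candidate-set idea in code (a toy sketch, not the paper's actual search: the greedy decoding rule, the fallback when the candidate set is exhausted, and the value of `M` are all assumptions made for this example):

```python
import numpy as np

def greedy_tour_from_heatmap(heat, M=5):
    """Toy greedy decoding: from the current city u, move to the unvisited
    city v with the largest heat value among u's top-M heat-map entries,
    falling back to all unvisited cities when the candidate set is exhausted."""
    n = heat.shape[0]
    cand = np.argsort(-heat, axis=1)[:, :M]  # top-M candidate set per city
    tour, visited = [0], {0}
    while len(tour) < n:
        u = tour[-1]
        choices = [v for v in cand[u] if v not in visited]
        if not choices:  # candidate set exhausted: consider all remaining cities
            choices = [v for v in range(n) if v not in visited]
        v = max(choices, key=lambda c: heat[u, c])
        tour.append(int(v))
        visited.add(int(v))
    return tour

# A heat map that strongly favours the ring 0 -> 1 -> ... -> 5 -> 0
heat = np.full((6, 6), 0.1)
for i in range(6):
    heat[i, (i + 1) % 6] = 10.0
demo_tour = greedy_tour_from_heatmap(heat, M=2)  # -> [0, 1, 2, 3, 4, 5]
```

Restricting moves to the top-$M$ entries is what reduces the effective search space, at the cost of occasionally needing the fallback.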
**Q**: “the negative values are the results of the rounding problem”... what is meant by the rounding problem?
A: Rounding problem: on many instances, the best known solutions reported by Concorde are not strictly optimal (confirmed in (Joshi et al., 2019), possibly due to round-off), and could be slightly improved ($< 10^{-2}$) by our algorithm (Fu et al., 2020).
Fu et al., 2020: Targeted sampling of enlarged neighborhood via Monte Carlo tree search for TSP
---
Rebuttal Comment 1.1:
Title: RE:
Comment: The author's rebuttal attempts to address your primary concerns. Do you have further questions or do your concerns still stand? | Rebuttal 1:
Rebuttal: Dear Reviewers, thank you for your comments.
We updated our model's performance in the tables included in the one-page rebuttal PDF.
We have incorporated additional baselines: POMO by Kwon et al. [2020] and more recent approaches such as DIMES by Qiu et al. [2022] and DIFUSCO by Sun and Yang [2023].
There is a typo in the TSP 500 column of the original manuscript. We have now corrected this error and updated the value; refer to the attached one-page PDF.
In the one-page rebuttal PDF, we also include a search-hyperparameter ablation study, an illustration of the search process, and the overall structure of SAG. We also found better performance on TSP 100.
We will also update the manuscript, rewrite the introduction part and add a “related works” section to include recent advancements in data-driven methods for the Traveling Salesman Problem (TSP). Here is a preview of the proposed content for the "related works" section:
# Related Works
Researchers have been exploring the application of RL and SL techniques to tackle the TSP [Joshi 2022]. For example, [Kwon, 2020] uses a data-driven approach known as Policy Optimization with Multiple Optima (POMO), which relies on RL and avoids the utilization of hand-crafted heuristics. [Qiu, 2022] proposes a Meta-Learning framework that enhances the stability of REINFORCE-based training. [Sun 2023] applies SL and adopts a graph-based diffusion framework. Additionally, they introduce a cosine inference schedule to improve the efficiency of their model. [Fu 2021] also uses SL in their approach. They incorporate a heat map-based technique into an end-to-end model. Furthermore, they leverage graph sampling to extract small sub-graphs from the initial large graph. Subsequently, they train the GNN model on these sub-graphs to generate the corresponding heat maps, represented as probability matrices over the edges. Finally, the authors merge all the individual heat maps to create the final heat map.
Reference:
[Joshi 2022]: Joshi, Chaitanya K., et al. "Learning the travelling salesperson problem requires rethinking generalization." Constraints 27.1-2 (2022): 70-98.
[Kwon, 2020]: Kwon et al. (2020). “Policy optimization with multiple optima for reinforcement learning.” Advances in Neural Information Processing Systems 33 (2020): 21188-21198.
[Qiu, 2022]: “A differentiable meta solver for combinatorial optimization problems.” Advances in Neural Information Processing Systems 35 (2022): 25531-25546.
[Sun 2023]: “Graph-based diffusion solvers for combinatorial optimization.” arXiv preprint arXiv:2302.08224 (2023).
[Fu 2021]: “Generalize a small pre-trained model to arbitrarily large tsp instances.” Proceedings of the AAAI conference on artificial intelligence. Vol. 35. No. 8. 2021.
Pdf: /pdf/c2c90320fbdc0e56db310885df2f32c10ea0c5da.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Globally solving the Gromov-Wasserstein problem for point clouds in low dimensional Euclidean spaces | Accept (poster) | Summary: This paper considers the Gromov-Wasserstein (GW) problem with quadratic cost, a (non-convex) quadratic optimization problem over the space of probability measures (in this work, restricted to uniform discrete measures supported on $n$ points), which takes the following form in the discrete setting:
$$\min_{\Gamma} \sum_{1 \leq i,j,i',j' \leq n} \left( |x_i - x_{i'}|^2 - |y_j - y_{j'}|^2 \right)^2 \Gamma_{ij} \Gamma_{i'j'},$$
where $\Gamma$ is a doubly stochastic matrix of size $n \times n$.
This problem is known to be hard to solve globally. This paper proposes an approach to do so that is tractable when the point clouds $X = (x_1,\dots,x_n)$ and $Y = (y_1,\dots,y_n)$ are in low dimension---say $d$. The core idea is that the GW problem can be reparametrized as
\begin{equation}\tag{1}
\min_{(W,w), \Gamma} - |W|^2 - w + c_0,
\end{equation}
where $\Gamma$ is still a bistochastic matrix, and $W,w$ must satisfy the relations $W = 2 X \Gamma Y^T$ and $w = \braket{L,\Gamma}$, where $L$ and $c_0$ are constant (w.r.t. $W,w,\Gamma$). Note that $W$ can be understood as the correlation between $X$ and $Y$ w.r.t. the joint law $\Gamma$, which is of size $d \times d$, so loosely speaking, the quadratic part of the GW problem does not depend on $\Gamma$ (which is of size $n \times n$) but only on the correlation it induces (which is a much smaller object when $d$ is small).
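This low-dimensional dependence is easy to check numerically (an illustrative sketch with arbitrary random data, not the paper's code): the bilinear term of the GW objective depends on $\Gamma$ only through the $d \times d$ matrix $X \Gamma Y^T$ (up to the factor $2$ in the definition of $W$).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 2
X = rng.normal(size=(d, n))  # columns x_i
Y = rng.normal(size=(d, n))  # columns y_j

# Any doubly stochastic Gamma (here: a uniform mixture of 5 permutation matrices)
Gamma = np.zeros((n, n))
for _ in range(5):
    Gamma[np.arange(n), rng.permutation(n)] += 1 / 5

# Quadruple-sum form of the bilinear (cross) term of the GW objective:
# sum_{i,i',j,j'} <x_i, x_i'> <y_j, y_j'> Gamma_ij Gamma_i'j'
G = X.T @ X  # Gram matrix of X
H = Y.T @ Y  # Gram matrix of Y
naive = np.einsum("ik,jl,ij,kl->", G, H, Gamma, Gamma)

# The same term, computed from the small d x d matrix X Gamma Y^T only
W = X @ Gamma @ Y.T
assert np.isclose(naive, np.linalg.norm(W) ** 2)
```

The identity follows from $\|X\Gamma Y^T\|_F^2 = \sum_{i,i',j,j'} \langle x_i, x_{i'}\rangle \langle y_j, y_{j'}\rangle \Gamma_{ij}\Gamma_{i'j'}$, which is why the quadratic part of the objective lives in dimension $d \times d$ rather than $n \times n$.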
From this reformulation, the idea of the paper to globally solve GW is the following:
- Let $\mathcal{F}$ denote the constraint polytope which links $W,w$ and $\Gamma$. This space is too complex to be represented explicitly (by linear constraints).
- Build a bounding box of $\mathcal{F}$. This is doable because we know that the correlation matrix between $X$ and $Y$ has (explicit) bounds on its entries---for instance when $d=1$ this is simply the cost of the increasing (resp. decreasing) matching.
- Build iteratively a sequence of "lower approximations" $(H_N)_N$ of $\mathcal{F}$ (made of supporting hyperplanes of $\mathcal{F}$), in the sense that minimizing the functional in (1) over $H_N$ yields a lower bound $L_N$ for the GW problem. Note: this is a concave minimization problem, which is tractable in low dimension ($d \times d + 1$ variables here).
- Let $(W_N, w_N)$ be the solution on $H_N$. Typically $(W_N, w_N) \not\in \mathcal{F}$ (otherwise, we have found a global minimizer of GW), but from this we can build a new constraint $H_{N+1}$ by solving a standard OT problem, providing a doubly stochastic matrix $\Gamma_N$ which is by definition sub-optimal, hence giving an upper bound $U_N$ for GW.
- Eventually, the authors prove that $U_N - L_N \to 0$ as $N \to \infty$, so the proposed approach yields a practical (in low-dimension) algorithm that converges to a **global** solution of the GW problem.
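As a side note on the OT subproblem above: with uniform marginals, the feasible set is the Birkhoff polytope of doubly stochastic matrices, whose extreme points are permutation matrices, so the linear OT step can be solved as an assignment problem. A minimal sketch (illustrative only; the cost matrix here is random rather than the one produced by the algorithm):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 8
C = rng.random((n, n))  # stand-in cost matrix of one linear OT subproblem

# A linear objective over the Birkhoff polytope attains its minimum at an
# extreme point, i.e. a permutation matrix, so this is an assignment problem.
rows, cols = linear_sum_assignment(C)
opt_cost = C[rows, cols].sum()

# Sanity check: no random permutation beats the assignment optimum.
for _ in range(200):
    p = rng.permutation(n)
    assert C[np.arange(n), p].sum() >= opt_cost - 1e-12
```

This is why each cutting-plane iteration stays cheap relative to the original quadratic problem: the expensive $n \times n$ object only ever enters through a linear OT/assignment solve.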
Strengths: The paper considers the difficult (and important, in my opinion) question of globally solving the GW problem.
The proposed approach is based on existing ideas (parametrization of GW by low-dimensional matrices) but pushes them further to get an original and interesting approach. Having globally converging algorithms for GW, even if restricted to low dimensional setups with "not-so-many points", can be useful to assess the quality of other algorithms that may only converge locally (maybe some of them still converge globally "most of the time", etc.).
The paper is clear and does not sacrifice mathematical technicality. Proofs of Prop 2 and Thm 1 have been checked and no major flaw was identified (aside from small details).
I also appreciate that the paper immediately acknowledges that its approach is limited to low-dimensional problems (which is not a major issue; using GW in low dimension is completely natural).
Weaknesses: # 1. On the convergence of the algorithm
From my understanding, the proof of global convergence of the algorithm (Theorem 1) is purely asymptotic: the gap $\epsilon_N = |U_N - L_N|$ is controlled by $|\theta_N - \theta_{N+k}|\ (\forall k)$, where $\theta_N = (W_N, w_N)$ lives in a compact set, which implies by contradiction that $0$ must be the single accumulation point of $\epsilon_N$.
While this makes sense (up to a few technical considerations, see below), it can be considered a weakness (at least from a theoretical viewpoint): the convergence $\epsilon_N \to 0$ is controlled by "how fast $\theta_N$ accumulates", and even in low dimension, without further investigation (i.e., thinking of $\theta_N$ as moving arbitrarily in a compact set), it may take a long time to reach a low gap $\epsilon$ (this seems to be suggested by the experiments, where the stopping criterion is set to $10^{-8}$ when $d=2$, but to $10^{-2}$ when $d=3$).
I am not saying that this is what happens in practice, nor that this invalidates the contribution of this theoretical result, but a discussion on this would be welcome.
# 2. On numerical experiments
While the proposed experiments are conducted in a reasonable way, they remain somewhat limited in my opinion. In particular,
- [related to the point above] I would have appreciated to have more illustrative experiments on the algorithm behavior, its convergence, etc.
- As far as I can tell, the paper only compares with the "local search" work [16] (Peyré et al., 2016). Why not compare with more modern works, such as [17] (Scetbon et al., 2022), [D] by Sejourné et al. (2021), or [E] (Li et al., 2023)? Note that [17] seems to handle larger instances ($n=10^5$ points), but may fail to globally converge as far as I can tell. Showcasing the strengths of the current work vs [17] (probably the closest work in spirit), even in illustrative scenarios, would be of interest.
# Minor comments (rather suggestions than actual weaknesses)
1. In Algorithm 1, the upper bound is updated as $U_{N+1} \leftarrow \min(U_N, \text{OT cost})$. From my understanding, this makes $U_N$ non-increasing, while $L_N$ is increasing (as a minimization problem with more and more constraints), so that $\epsilon_N$ is decreasing. This is never mentioned as far as I can tell, but it seems to be used in the proof when saying that "if $\epsilon_N \not\to 0$, then it must be lower bounded". Am I correct? Also, the proof says "assume that $\epsilon_N = \dots$" (which to me means "assume that $U_{N+1} = \text{OT cost}$" rather than "$U_{N+1} = U_N$"), but never discusses the possibility that $U_{N+1} = U_N$. In any case, this does not invalidate the proof since, from my understanding, the $\min$ in the algorithm is not required to prove convergence (even if we then cannot ensure that $\epsilon_N$ is decreasing). Either way, I think this is worth some clarification.
2. The discussion of related work and context can be improved. For instance, I do not think that [16] is a suitable reference in the introduction when mentioning the GW problem for the first time; it should rather be credited to either Mémoli (2011) or Sturm (2012 - "The space of spaces"). What is new vs known (from [16], or for instance from [A, Sec 2.2.3] and related works) in sections 2 and 3 should also be highlighted. Similarly, the fact that the quadratic term in GW only depends on $X \Gamma Y^T$ is also used (among others) in [B, C] (note: [C] is a preprint put on arXiv after the NeurIPS submission; this is a suggestion for the revised version, not a criticism).
3. [typo] line 90 : "has" should be "have" I think?
4. [typo] line 114, I think that "maximum" should be "minimum" ?
5. I wonder how useful are the variables $Z_N, \alpha_N$, given that $Z_N$ is simply $2 W_N$ and $\alpha_N = 1$. I understand that this is a convenient way to write a "general" hyperplane equation $\braket{Z,W} + \alpha w \leq \beta$, but to me it turned out to hinder the reading a bit.
6. [typo] line 168, "that the that".
# References
- [A] A contribution to Optimal Transport on incomparable spaces, T. Vayer, 2020
- [B] On the existence of Monge maps for the Gromov-Wasserstien problem, Dumont et al., 2023.
- [C] The Gromov-Wasserstein distance between spheres, Arya et al., 2023.
- [D] The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation, Sejourné et al., 2021
- [E] A Convergent Single-Loop Algorithm for Relaxation of Gromov-Wasserstein in Graph Data, Li et al., 2023
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I think that the work may be improved by adding:
- A discussion / numerical illustration on the convergence (rate) of the algorithm (Section 1. in Weaknesses), or an explanation of why this is not relevant.
- A more precise comparison with concurrent works, in particular [17] (which is the closest to the current work as far as I can tell), from both a theoretical and numerical perspective ; where scalability but also quality of the result (which, I guess, may favor the proposed approach). If the comparison is not meaningful, please explain why.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors clearly state that the work is dedicated to low-dimensional point cloud, which is its main limitation.
Aside from that, I do not see any potential societal impact specific to this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you sincerely for your comments and questions. See below for our answers.
Weaknesses:
On the convergence of the algorithm:
We are working on quantifying the convergence rate. The number of iterations is bounded by $O((1/\epsilon)^{\ell_x\ell_y+1})$ (to cover the compact set), but the low-dimensional QP problem grows in complexity with every iteration. The experiments in supplementary material 1.4.1 suggest much faster rates, but this is work in progress.
If the problem is extremely symmetric, e.g., equidistant points on the surface of a sphere being matched to equidistant points on another sphere, then convergence would be very slow. We have not yet introduced symmetry-breaking concepts into the scope of the method.
On numerical experiments:
1. We are planning to add another experiment matching geometric data, e.g., MNIST objects, where we can highlight issues with symmetry when using local methods. In brief, comparing two MNIST figures takes 0.7 seconds on average (standard deviation 0.39 seconds), running 64 iterations on average (standard deviation 24 iterations), on problems with 169 points on average (standard deviation 30 points). In comparison, [16] has a mean relative error of about 10% in its measurements of the GW discrepancy, which for some figures is more than the mean gap between the different classes.
2. Indeed, the results in [17] and [D] improve the performance of solving the linear OT, which can be incorporated into our algorithm as well. However, the main purpose of this paper is to show a simple method to globally solve the GW problem with accuracy certificates. Since we have not optimized the subparts of our algorithm using the computational methods from, e.g., [17], we think that it makes more sense to compare with [16]. Regarding scalability, note that our method fully decomposes the computations into non-convex low-dimensional quadratic problems and linear OT problems, so one can apply any of the modern tricks to the OT problems in order to improve speed and problem sizes. This has not been a focus of this work, and we have just used off-the-shelf solvers for the OT problem in order to simplify the description and implementation.
Minor comments:
1. Indeed, the upper bound $U_N$ is the least upper bound found so far, which decreases; thus the gap is monotonically decreasing. This can be stated more clearly.
2. Indeed, this is an error that occurred while restructuring the manuscript. To our understanding, the Gromov-Wasserstein distance should be credited to Mémoli [13] or the 2007 proceedings article. Thank you for pointing out the newly published work!
3-6. Indeed, thank you for finding the typos.
Questions:
1. We have answered this under section weaknessess.
2. The proposed method considers the topic of a global optimum of the GW distance. The main point of the paper is to show that one can find a global optimal solution within a reasonable time, and we do not claim to be faster than [16] or [17] on any GW-problems. The local search algorithms converge faster to a local optimum, but the advantage of our approach is that we are guaranteed to reach a global optimum. For many applications, this can be critical.
Also note that we have not focused on optimizing the computational complexity in each of the subparts of the algorithm. For example, we do not utilize Sinkhorn’s method for solving the optimal transport subproblems (Equation 9). It is true that [17] utilizes low rank structures similar to the ones we consider for improving the computational complexity of [16]. Similarly, we could use the methods from [Scetbon, Cuturi, Peyré - ICML, 2021] for improving the computational speed of the subproblems in our method. But since we have not optimized these computations, we think that it makes more sense to compare with the method in [16]. We will add this comment to the final version of the paper to motivate why we compare with [16] instead of [17].
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you for taking time to answer my question. I am fairly convinced by the point
> Since we have not optimized the subparts of our algorithm using the computational methods from, e.g., [17], we thing that is makes more sense to compare with [16].
and as such will keep my supportive grade.
---
Reply to Comment 1.1.1:
Comment: Thank you! | Summary: The Gromov-Wasserstein optimal transport problem is a non-convex problem known to be hard, closely related to the QAP. The authors consider its special case where the two metric spaces involved are Euclidean with small numbers of dimensions: we want to permute one set of points such that the sum of squares of differences of Euclidean distances between all point pairs is minimized. This simplification (compared to the general GW OT problem) is possible because the matrix of Euclidean pairwise distances for any set of points in $R^k$ has rank at most $k+2$. This allows the authors to formulate the problem as minimization of a simple concave quadratic function of a small number of parameters ($kl+1$, where $k,l$ are the dimensions of the spaces) over a convex polyhedral feasible set, which is a projection of the set of doubly stochastic matrices. This problem is solved to global optimality by alternating two steps: (1) globally solve the concave minimization problem over an outer approximation of the feasible set; (2) improve this outer approximation by generating a cutting plane (a linear inequality valid for the above projection of the set of doubly stochastic matrices). Step 1 can be done either by an off-the-shelf concave minimization algorithm (such as branch & bound) or by reasoning over the set of extreme points of the feasible set (which is fast for low-dimensional spaces). Step 2 leads to solving the ordinary linear (Kantorovich) OT problem. The algorithm is proved to converge to a global optimum.
The proposed algorithm is first tested on synthetic data. Here, we match two randomly generated (either uniformly on a disc or normally distributed around the origin) point sets from $R^2$ or $R^3$. This is compared to other globally optimal methods solved by Gurobi and to the local search method [17] with multiple initializations. Second, the method is tested on a real application from biology. These experiments show that the method is almost always much faster than the other methods (the global methods are in fact usable only for small instances, due to the large number of parameters).
Strengths: A simple algorithm for a difficult problem, without many tuning parameters, which performs well in experiments.
Clearly written.
Weaknesses: The most important issue is limited experimental testing. The main experiment is done on synthetic, randomly generated data. However, it is known that optimization algorithms often behave very differently on random data and on real data.
Moreover, I do not quite understand the setup in the biological experiment; in particular, I get lost in the 1st paragraph of section 5.1. Please explain it better. What are $n, l_x, l_y$ in this case?
To show more clearly the strengths and weaknesses of the approach, it should be tested on as many instance types as possible. Currently I am not convinced that the method would not take unacceptably long for some instances from practice.
One option is to make more extensive synthetic experiments, with more complex data than just uniformly/normally distributed. E.g., one can take a non-random set of points in 2D or 3D (draw a shape or use a shape from a public shape database) and to synthetically generate the second set by permuting these points and adding noise to them (gaussian, uniform) or replace some of them with outliers.
Another obvious suggestion is shape/object matching on real data, where the point distances are measured by the Euclidean metric (rather than geodesic). Possible inspirations for such experiments are [13, 17] (but, I believe, also other works).
As for novelty: Although the method just combines tools well-known in optimization, the main idea (decoupling the concave minimization part and the cutting plane part) is clever. Unfortunately, I do not know the relevant literature well enough to be sure that a similar approach has not been proposed before. I am surprised that no algorithm for globally solving this problem has been proposed before by others - such algorithms have apparently been proposed only for the general GW problem and QAP.
The initial formulation (1) of the GW problem seems to be wrong because, to my knowledge, the GW optimal-transport problem optimizes over doubly-stochastic matrices rather than permutation matrices (i.e., it allows "soft matching"). Therefore, the relaxation from (4) to (5) is unnecessary and in fact misleading. If (5) minimizes a concave function over a convex set, the two problems are indeed equivalent, but the explanation should go the other way.
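To see why the two problems are equivalent for a concave objective, note that any doubly-stochastic matrix is a convex combination of permutation matrices (Birkhoff), so a concave function attains its minimum over the polytope at a permutation. A toy numerical check (my own sketch, not code from the paper):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
W = -A @ A.T                                   # negative semidefinite
f = lambda G: np.trace(G.T @ W @ G)            # concave quadratic in G

perms = [np.eye(n)[list(p)] for p in permutations(range(n))]
best_vertex = min(f(P) for P in perms)

# Concavity gives f(sum_i lam_i P_i) >= sum_i lam_i f(P_i) >= min_i f(P_i),
# so no doubly-stochastic matrix beats the best permutation vertex.
for _ in range(200):
    lam = rng.dirichlet(np.ones(len(perms)))
    G = sum(l * P for l, P in zip(lam, perms))
    assert f(G) >= best_vertex - 1e-9
print("minimum over doubly-stochastic matrices is attained at a permutation")
```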
Minor issues / suggestions:
- 77: "the first two sums" should be "the 1st and 3rd sum"
- In (2), it would be logical to omit the 2nd and 3rd terms because they do not depend on Gamma (as noted above).
- 92: You may wish to justify/cite why a Euclid distance matrix has rank at most $l_x+2$.
- Proposition 1 should not really be a proposition, it is just an algebraic manipulation.
- 133: The part defining the set $\mathcal{F}$ is over-complicated and confusing. Why don't you write just
$\mathcal{F} = \\{ (2X\Gamma Y, \langle L,\Gamma\rangle) \mid \Gamma \in P \\}$, which shows that $\mathcal{F}$ is a linear map of $P$.
- In the final version, Table 1 should report also the number $N$ of iterations (= added cutting planes) needed to achieve the prescribed accuracy. Currently, this is only in the supplement.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why were experiments on more real instance types not included, such as with shape/object matching? Is there a substantial obstacle or you just considered this as out of the scope of this paper?
Can the method be extended to be resilient to outliers? One option for this would be to use 1-norm rather than 2-norm in pairwise distances - but this would violate the low-rank assumption and make the approach inapplicable.
Can the trick be applied to a wider class of problems, such as a wider subclass of QAP?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The experiments are limited, not convincingly showing efficiency of the method on a wide enough class of real instances.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you sincerely for your comments and questions. See below, for our answers.
Weaknesses:
Limited experimental testing:
The main focus of this paper is on difficult problems that appear in, e.g., computational biology, where there are symmetries in the data and thus a large set of local optima. For many problems with image matching on simple figures (cats, dogs, etc.), both local search methods and the proposed global optimization method would be very quick. The local search would typically give the correct result and be quicker, but without any guarantees. Our method would be slower, but still fast, and come with a guarantee. However, it makes sense to add some additional such examples and illustrate that our method works also for this case. Thus we will include additional numerical simulations on known simple geometrical objects, such as MNIST. In brief, comparing two figures in MNIST takes on average 0.7 seconds (standard deviation 0.39 seconds), running on average 64 iterations (standard deviation 24 iterations) on problems of 169 points on average (standard deviation 30 points). In comparison, [16] has a mean relative error of about 10% in its measurements of the GW discrepancy, which for some figures is more than the mean gap between the different classes.
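To illustrate concretely why symmetries in the data create multiple globally optimal permutations (the setting where local search struggles), here is a small toy sketch of our own (illustration only, not code from the paper), matching a square to itself:

```python
import numpy as np

# Corners of a unit square; a 90-degree rotation or a mirror reflection
# permutes them while preserving all pairwise distances, so the GW
# objective has several global optima -- local search may return any.
X = np.array([[0., 1., 1., 0.],
              [0., 0., 1., 1.]])
C = ((X.T[:, None, :] - X.T[None, :, :]) ** 2).sum(-1)  # squared distances

def gw_objective(perm):
    # GW discrepancy of the square matched to itself under a permutation
    return ((C - C[np.ix_(perm, perm)]) ** 2).sum()

for perm in ([0, 1, 2, 3], [1, 2, 3, 0], [3, 2, 1, 0]):
    print(perm, gw_objective(perm))  # all three give 0.0
```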
Setup in the biological experiment:
In the biological experiment, $n$ is the number of points, $\ell_x$ is the dimensionality of the space in which the points of $X$ are located, and analogously for $\ell_y$. In the biological experiment in section 5.3: $n=500$ and $\ell_x=\ell_y = 2$ (since we compare 2-d images). This will be included in the text. Thank you for pointing this out.
Euclidean metric: Using the (non-squared) Euclidean metric yields distance matrices of high rank with mixed positive and negative eigenvalues, so such problems fall outside the class that our paper considers. We are currently working on extending the results in this direction.
GW problem: The original Gromov-Wasserstein description [13] indeed relaxes the problem to general marginals, and we shall add this in the introduction before Equation (1) to correct the terminology. Thank you for pointing this out. When the marginals are uniform over $n$ points, the result is still valid with permutations as the resulting matching; this is closely related to the pre-model using couplings and correspondences in the Gromov-Hausdorff distance. One may consider discrete marginals $\mu$ and $\nu$ in this model. Then, the only difference is that $L = (1^T\mu + 1^T\nu) m_xm_y^T - 4m_x\nu^T Y^T Y - 4X^T X\mu m_y^T$ and $c_0 = (\langle C_x,C_x\rangle + \langle C_y,C_y\rangle - 4\nu^Tm_y \mu^Tm_x)/2$.
The minor issues:
line 77: Indeed, thank you for pointing this out!
Eqn (2): We decided to keep this as it is relevant for the discrepancy, even though not relevant for the optimization problem.
Line 92: We can add this close to (3), where it is clear since the matrix is a sum of matrices of rank 1, $\ell_x$, and 1.
Proposition 1. Yes, but we added it to increase readability and clarity.
Line 133. Thank you for this suggestion!
Table 1. Yes, we shall add the number of added cutting planes to the tables.
Questions:
We realize that there is a gap between the random data and the biological example. We are planning to add low-dimensional geometric data examples, e.g., MNIST, to show the performance and when symmetry becomes the key issue for local search methods. Such data is indeed in scope for the method.
Handling outliers and 1-norm distance matrices would indeed violate the low-rank assumption. Currently such problems are not within the problem class considered in the paper, but we are looking to generalize the framework to handle them.
As the QAP problem is restricted to permutations, it would be a natural extension to consider a wider class of matrices. When the matrices contain both positive and negative eigenvalues, this becomes a much harder problem, which is currently out of scope of the method, but we are looking into handling such problems.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your rebuttal. I find (along with the other reviewers) the algorithm clever and nice, which will probably ensure acceptance. However, I still believe it is a pity that you did not include a wider range of data types in the experiments. Your explanation why you did not do this is that "The main focus of this paper is on difficult problems that appear in, e.g., computational biology and where there are symmetries in the data and thus there is a large set of local optima." I find this explanation unfair because there are several other scenarios with symmetries in the data (and hence many similar local minima), such as: shape (represented by point cloud) matching where the shape has symmetries, or feature matching in stereo images with repeated patterns (e.g., matching two views of a building with many similar windows). These experiments could be on both synthetic and real data. Including such experiments might ensure a greater impact of the paper. (I find your planned MNIST experiment insufficient in this respect.)
However, I find the paper acceptable even without these additional experiments. Based on other reviews and rebuttals, I increase my rate to "weak accept".
One more suggestion (you may ignore it): some reviewers complain about missing convergence rate analysis. This is somewhat (weakly) related to the hardness of the problem solved. Clearly, the QAP problems and its low-rank version are NP-hard. But how about approximability? Perhaps, low-rank QAP is easier to approximate than the general QAP. Is this known? If the low-rank QAP has no FPTAS (which I assume), the convergence rate cannot be polynomial in problem size and $1/\epsilon$. Perhaps, a remark on this might be useful.
---
Reply to Comment 1.1.1:
Comment: Thank you for the reconsideration of the score!
We will include a more detailed discussion about the convergence rate.
On the shape examples used in [16] (https://github.com/gpeyre/2016-ICML-gromov-wasserstein/tree/master/code/data/shapes), the method of [16] has a mean relative error of 3% compared to the global result, when sampling 500 points from the images.
- By leveraging this new algorithm, the authors conducted comparative evaluations against existing techniques for computing the GW distance. The results demonstrate significant improvements in terms of computational speed.
Strengths: - One of the key strengths of this paper is the introduction of a novel and interesting algorithm. The proposed method offers a fresh perspective on computing the Gromov-Wasserstein (GW) distance for low-dimensional point clouds.
- The research holds significant importance in the field of machine learning and data science, given the growing popularity of the GW distance. By proposing a method that accelerates the computation of the GW distance, the paper addresses a practical need and offers a valuable contribution to the field.
- The clarity of presentation is a notable strength of this paper. The authors effectively communicate their ideas, methodologies, and findings, making the paper accessible to readers.
- The experiments well support the acceleration effect of the proposed algorithms.
Weaknesses: A potential weakness of the paper is the lack of a complexity analysis for Algorithm 1, which could be addressed to further enhance the paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - line 190: in the last inequality, am I right that a square root is missing?
- line 196: Is there a typo in the lower bound?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you sincerely for your comments, feedback, and questions. See below for our answers.
Weaknesses:
We agree that this is a weakness and we are currently working on quantifying the convergence rate. For a given dimensionality and number of points, it should be straightforward to show that our algorithm finds an $\epsilon$-accurate solution in $O((1/\epsilon)^{\ell_x\ell_y+1})$ iterations. However, this trivial complexity bound is not very satisfying, and it doesn't take into consideration the increasing complexity of the QP problem. We believe that it should be possible to prove a faster convergence rate. In practice, we have experienced that the method converges much faster, as exemplified in section 1.4.1 in the supplementary material.
Questions:
1. Line 190: Yes, thank you for finding this error.
2. Line 196: The notation might be confusing, what we mean is that the lower bound is limited to the largest possible $\|W\|_F$ obtained in the bounding box. We will improve the readability of this.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions. After reading the rebuttal and other reviews, I would like to maintain my current score. | Summary: This paper solves the Gromov Wasserstein (GW) distance problem for squared Euclidean norm by considering a low-dimensional space on which the computation is performed, using a cutting-plane method. To this end, they write the GW problem as a low-rank optimization problem. Their algorithm is supported by theoretical convergence guarantees.
Strengths: This article makes it possible to compute the Gromov-Wasserstein distance efficiently. In a way, this paper extends the results of [17] which reformulated the Gromov-Wasserstein problem as a low-rank quadratic problem. The idea of using the cutting plane algorithm to efficiently solve this optimization problem in a projected low-dimensional subspace is new, and the proposed algorithm converges to a global optimal solution.
Weaknesses: Solving problem (9) at each iteration of the algorithms remains computationally heavy, as it corresponds to solving an optimal transport problem with $\Gamma$ a $n\times n$ matrix.
For a more relevant contribution, and as the use of the low-rank quadratic structure of the Gromov-Wasserstein optimization problem has already been exploited in [17], the authors should compare their results with [17] in experiments, and not only with the local search method of [16].
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The matrices $C_x$ and $C_y$ are assumed to be positive definite and of low rank. How can the low-rank assumption be ensured? What does this mean in terms of the point clouds $X$ and $Y$?
Can the cutting plane algorithm trick be extended to the case where both point clouds have weights (i.e. a non-uniform discrete measure), and for which the probability measures do not have the same number of points (therefore the mass in the transport plan $\Gamma$ could split)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: A comparison with more recent methods than [16] for computing the Gromov Wasserstein distance should be included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you sincerely for your comments and questions. See below for our answers.
First we would like to clarify that the paper is not just an extension of the works [16] and [17]. Even though we also consider the GW problem, our method is guaranteed to reach the global optimum for the problems we consider. This is, to the best of our knowledge, the first result that guarantees a globally optimal solution for a class of GW problems where the domain is multidimensional (for 1-d such results exist). Also, note that even though the work [17] uses similar low-rank structures, the structures are used completely differently. In [17], the low-rank structures are used to speed up the method from [16]. In our paper, the low-rank structure is used to ensure convergence to a global optimum (and not just a local one).
Weaknesses:
The proposed method targets the global optimum of the GW distance. The main point of the paper is to show that one can find a globally optimal solution within a reasonable time, and we do not claim to be faster than [16] or [17] on any GW problems. The local search algorithms converge faster to a local optimum, but the advantage of our approach is that we are guaranteed to reach a global optimum. For many applications, this is critical.
Also note that we have not focused on optimizing the computational complexity in each of the subparts of the algorithm. For example, we do not utilize Sinkhorn’s method for solving the optimal transport subproblems (Equation 9). It is true that [17] utilizes low-rank structures similar to the ones we consider for improving the computational complexity of [16]. Similarly, we could use the methods from [Scetbon, Cuturi, Peyré - ICML, 2021] for improving the computational speed of the subproblems in our method. But since we have not optimized these computations, we think that it makes more sense to compare with the method in [16]. We will add this comment to the final version of the paper to motivate why we compare with [16] instead of [17].
Questions:
1. With the squared Euclidean distance, this is ensured trivially, as $C_x$ and $C_y$ will have rank at most $\ell_x+2$ and $\ell_y+2$. For more general settings, it gets a bit more complicated, but this is out of the scope of this paper. Work on low-rank approximations of point clouds and their distance matrices is a whole separate field of investigation, to which we can add references.
2. Yes, then $L = (1^T\mu + 1^T\nu) m_xm_y^T -4m_x\nu ^T Y^T Y-4X^T X\mu m_y^T$, $c_0 =(\langle C_x,C_x\rangle+ \langle C_y,C_y\rangle -4\nu^Tm_y \mu^Tm_x)/2$ where $\mu$ and $\nu$ are the marginals.
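The rank bound in answer 1 is easy to verify numerically; a small sketch of our own (illustration only, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, ell = 100, 3                       # 100 points in R^3
X = rng.standard_normal((ell, n))     # columns are the points

# Squared Euclidean distances: C_ij = |x_i|^2 + |x_j|^2 - 2 x_i . x_j,
# i.e. a sum of matrices of rank 1, 1, and at most ell => rank <= ell + 2.
sq = (X ** 2).sum(axis=0)
C = sq[:, None] + sq[None, :] - 2 * X.T @ X

print(np.linalg.matrix_rank(C))  # at most ell + 2 = 5
```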
Limitations:
Please see the answer in the weaknesses section.
---
Rebuttal Comment 1.1:
Comment: Let me just remark that the sentence "This is to the best of our knowledge the first results that guarantees a global optimal solution for a class of GW-problems where the domain is multidimensional" may be somewhat naive. Indeed, there are such algorithms - e.g., exhaustive search over all permutations or (spatial) branch-and-bound. What you probably meant is "in reasonable time in practice".
(Note, I am a different reviewer.)
---
Reply to Comment 1.1.1:
Comment: Indeed, you are completely correct about this. Thank you.
Title: Answer to Reviewer yHuE
---
Rebuttal Comment 1.2:
Title: Response to the authors
Comment: Thank you for your detailed reply and for your intention to add to the experiments section one with the real-world MNIST dataset. From the answers you have provided to the various reviewers, the efficient global optimization approach you propose for GW computation seems more innovative than at first sight, and in particular the reason for comparing your method with [16] is more convincing. As the presentation of the general idea and theoretical parts are, in my opinion, particularly clear, I will increase my score from 4 to 6.
---
Reply to Comment 1.2.1:
Title: Answer to Reviewer G7H7
Comment: Thank you for your reconsideration! | Rebuttal 1:
Rebuttal: Dear reviewers,
Thank you sincerely for the work you have put into reviewing the manuscript. Your reviews have been thorough, pointing out strengths and weaknesses in the paper and suggesting clarifications that benefit the presentation and clarity of the paper. We will do our very best to incorporate the suggestions and remarks under the page limit constraint. In the rebuttal section of each review, we have answered questions and comments in order to make it clearer which answer goes with which question.
Again, thank you for your reviews and feedback. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a new algorithm to solve the Gromov-Wasserstein problem between two sets of points in Euclidean spaces when the ground cost is the squared Euclidean norm. This is done by first reformulating the Gromov-Wasserstein problem as a low-rank QAP problem,
then relaxing the set of admissible couplings in a way that the optimal solutions of the obtained relaxed problem stay the same, and then using a cutting-plane method to solve the relaxed problem. The obtained algorithm provides at each iteration a lower bound and an upper bound on the value of the optimal solution, and it can be proven that the gap between those bounds theoretically converges to zero at infinity, implying that the proposed algorithm theoretically converges to a global optimum. The experiments offer a comparison in terms of computational efficiency between the proposed algorithm and a more "traditional" algorithm, and emphasize the importance of converging to a global optimum when using Gromov for applied problems.
Strengths: I enjoyed reading this paper. To the best of my knowledge, the proposed algorithm is new.
Besides the proposed algorithm in itself, the paper highlights that most algorithms which compute Gromov only converge to a local minimum, a problem that is often ignored in the literature. Moreover, when solving the Gromov-Wasserstein problem with "traditional" algorithms, it is not even possible to know in practice whether the solution we've converged to is a local or global optimum. Thus, proposing an algorithm that is guaranteed to converge towards the global optimum is a real strength in my opinion.
Weaknesses: - As someone familiar with the Gromov-Wasserstein problem but not an expert in optimization, I find this paper a bit difficult to read because some important steps in the mathematical reasoning lack detail and so, in my opinion, too much of the reasoning is left to the reader.
- I think the experiment section could be improved: the experiment of Figure 2 could be clearer, and it would be nice to have a synthetic experiment that exhibits a case where the proposed algorithm converges to the global optimum whereas the local search method remains stuck in a local optimum.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The following questions are related to the steps of reasoning that I find could be clearer:
- 1/ What is the definition of the rank of a QAP problem?
- 2/ Why do we need to relax Problem (4) to doubly stochastic matrices instead of permutation matrices before applying
the cutting-plane method?
- 3/ Can you explain how to obtain the initial $ (Z_r,\alpha_r,\beta_r)_r $ from solving the bounding box problem?
- 4/ I'm a bit confused about Equation (8a): are we looking for a different $\Gamma$ for each $i,j$? The way it is written suggests we're looking for the same $\Gamma$ for all $i$ and $j$.
- 5/ $ N $ stands for the current iteration number, but also for the number of initial constraints. I find that confusing.
Experiments:
- 1/ In Figure 2 left: are the different structures plotted local optima obtained using a local search method? I'm not sure I understand Figure 2 right either: is the blue the result of your algorithm? Can you explain why it is better than the result with the local search method and where we see the comparison with the expert evaluation?
- 2/ How does the method compare (in terms of computational efficiency) to approximate methods (for instance Entropic Gromov, sliced Gromov, etc.) with several random initializations?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have not clearly discussed the limitations of the algorithm.
In my opinion, the principal limitation is that the method remains computationally costly, and so, as this is the case for almost all methods that compute Gromov, it is still unusable in higher dimension or when the number of points is large.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you sincerely for the comments, feedback, and questions. See below for our answers.
Weaknesses:
1. We will go over the mathematical derivations again and try to improve the readability for the final version of the paper.
2. We are planning to expand the numerical results in the final version. We will improve the description of the experimental setup in Figure 2. In the supplementary material 1.4.3, we presented the statistical performance of the local search method on this example, which on average has a relative error above 10%. We will move this into the paper in the final version. We also plan to change Figure 1 so that it illustrates that one could get stuck in a second local optimum using a local method. Here we can identify the whole region of attraction of the local optimum that is not a global optimum.
Questions:
1. A QAP problem may be written in the standard form $\min_{\Gamma \in P} \mathrm{trace}(W\Gamma D\Gamma^T)$. Now $\mathrm{trace}(W\Gamma D\Gamma^T) = \mathrm{vec}(W\Gamma D)^T \mathrm{vec}(\Gamma) = \mathrm{vec}(\Gamma)^T (D^T\otimes W) \mathrm{vec}(\Gamma)$. We then denote the rank of $D^T\otimes W$ as the rank of the QAP. In the GW case using the squared Euclidean norm, we have the quadratic expression $\mathrm{trace}(X^TX \Gamma Y^TY \Gamma^T)$. If we identify $W = X^TX$ and $D = Y^TY$, then the rank is the product of the dimensionalities $\ell_x$ and $\ell_y$ of $X$ and $Y$, respectively, because of the eigenvalue structure of the Kronecker product.
2. We are not aware of any convenient way to represent the set of points that corresponds to permutations in the restricted low dimensional space. The main purpose of the relaxation is to obtain easier subproblems, and to the best of our knowledge, optimizing over permutation matrices is in general a difficult combinatorial problem.
3. Equation 8 describes the bounding box problem: it gives the min and max values of the elements in the low-rank representation. Equation 8a comprises $\ell_x\ell_y$ equations indexed by $i = 1,\dots,\ell_x$ and $j = 1,\dots,\ell_y$; for the maximum expression, for example, $Z_{i,j} = \delta_{i,j}$, $\alpha_{i,j} = 0$ and $\beta_{i,j} = \max_\Gamma (2X\Gamma Y^T)_{i,j}$.
Note also that these inequalities can be written more plainly as $W^{\min}\le W\le W^{\max}$
where $W^{\min}$ and $W^{\max}$ are specified elementwise by the minima and maxima in Equation 8a.
4. Equation 8a is $\ell_x\ell_y$ equations with one optimal $\Gamma$ for each $(i,j)$.
5. N stands for the number of constraints. We will clarify this and how it relates to the iteration number in the final version.
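The Kronecker identity and rank computation in answer 1 can be checked numerically; a small self-contained sketch of our own (illustration only, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, lx, ly = 6, 2, 3
X = rng.standard_normal((lx, n))      # points in R^2
Y = rng.standard_normal((ly, n))      # points in R^3
W, D = X.T @ X, Y.T @ Y

# trace(W G D G^T) = vec(G)^T (D^T kron W) vec(G), with column-major vec
G = np.eye(n)[rng.permutation(n)]     # a random permutation matrix
vecG = G.flatten(order="F")
assert np.isclose(np.trace(W @ G @ D @ G.T),
                  vecG @ np.kron(D.T, W) @ vecG)

# rank(D^T kron W) = rank(D) * rank(W) = ly * lx, the "rank of the QAP"
print(np.linalg.matrix_rank(np.kron(D.T, W)))  # lx * ly = 6
```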
Experiments:
1. The proposed algorithm provides a sequence of permutations that may be close to locally optimal, as visualized in Figure 2 left. The objective value at these permutations may be very close to the global optimum (in relative error) while the permutations themselves are geometrically very far from the globally optimal permutation, here differing by rotations and mirror symmetries. This only exemplifies the complexity of the problem to be solved. Figure 2 right shows classification quality metrics compared to the expert classification (ground truth), i.e., 1 would be a perfect result. The figure shows that the proposed method results in a better solution with regard to all the quality metrics. We will try to clarify this better in the final version.
2. A key advantage of this method is that one knows when the optimum is reached. When considering local search (e.g., Entropic Gromov), it is difficult to know how close the best solution is to the optimal solution. In particular, this is the case for problems with symmetries, which are common in, for example, computational biology. In the final version, we will more clearly highlight the advantages of our global optimization approach compared to local search with multiple initializations. The sliced Gromov-Wasserstein version handles problems where the spaces are of a different nature. However, given a distance matrix, under certain conditions it is possible to find a representation in Euclidean coordinates.
Limitations:
It is true that the method is computationally costly if the number of dimensions increases. However, for small dimensional problems (e.g., 2d and 3d objects which are common in practice), the method scales well in the number of points (in contrast to most other methods for solving GW problems).
---
Rebuttal Comment 1.1:
Title: More precisions and suggestions
Comment: Thank you for your clarifications, which are quite helpful. I'm fairly convinced that this paper makes a significant contribution in proposing an algorithm for solving the GW problem which converges to a global optimum or indicates how far we are from it. Yet, I still think this paper could be improved in the presentation of the theoretical results and in the choice of experiments.
On the theoretical results:
- In my opinion, a justification similar to your answer to question 1/ on the low-rank structure of the problem should definitely be included in the final version, since I'm not sure that all readers interested in GW are familiar with the standard form of a QAP.
- Same goes for 2/: in my opinion the relaxation should also be justified, even if it is a classic "trick" of optimal transport theory. Furthermore, as Reviewer yHuE pointed out, this relaxation is misleading because in the classical form of the GW problem, the optimization is performed over the set of doubly stochastic matrices and not permutation matrices. This is not a big issue in itself, but what makes it misleading is that the whole point of section 3 is to derive Expression (5), which you call the "low rank formulation" of the GW problem. Yet, (5) is already known in the literature, see Proposition 1 of [16] or section 2.2.3 in [A]. Hence the goal of section 3 is actually to recover a known formula and to point out that this formula has a low-rank structure when the cost matrices $C^x$ and $C^y$ are squared distance matrices, while the way it is written suggests (in my opinion) that you are deriving new formulas specifically for the algorithm you propose.
On the experiments:
- A concern I have is that you compare yourselves principally with [16], which is essentially an entropic-regularized solver of the GW problem, while
your proposed algorithm solves, if I'm understanding correctly, the non-regularized problem. This is especially confusing since
[16] is also cited in the documentation of Python Optimal Transport for the non-regularized GW solver. First, I think you should state
more clearly that you are using the entropic solver and which regularization parameters you use in your experiments. Second, I think
you shoot yourselves in the foot a bit by not comparing with the non-regularized solver, because (i) you only compare against
a solver that tackles an "easier" problem, and (ii) to the best of my understanding, we currently cannot conclude whether, in Figure 2 (right), the gain in performance is due to converging to a global optimum or to solving the non-regularized problem instead of the regularized one.
- I think Figure 2 left and right should be separated into two distinct figures, because they consist of two distinct experiments (although over the same dataset); currently, it is difficult to understand the goal of these experiments when reading the paper.
[A] A contribution to Optimal Transport on incomparable spaces. Vayer, 2020.
---
Reply to Comment 1.1.1:
Comment: Thank you for your further comments.
Indeed, for Proposition 1 we should cite the reference you so helpfully provided; we were not aware of this work. Also, in the definition of the GW distance, we will correct the terminology so that it is clear that the GW problem typically refers to the optimization problem over stochastic matrices (see also our answer to Reviewer yHuE).
For comparing with [16], we have used the package provided by their group on GitHub. There, it is possible to set the regularization parameter to 0, in which case the network simplex is used. This is what we are comparing with. We will clarify this in the final version of the paper. | null | null | null | null | null | null |
Visual Instruction Inversion: Image Editing via Image Prompting | Accept (poster) | Summary: The authors propose a method for finding a text-based editing direction extracted from a pair of “before” and “after” images depicting the desired edit. Using a fixed, pretrained diffusion model (in this case Stable Diffusion), the authors optimize the text-conditioning embedding to align with the CLIP-space direction between the two images. This is done will minimizing a reconstruction loss to ensure that the main details of the original image are preserved. Qualitative and quantitative results show the effectiveness of the method when compared to existing approaches (InstructPix2Pix, SDEdit, and other image editing techniques such as Imagic). The authors also provide a useful analysis of how the noise and initialization influence the generation process when aiming to edit a given image.
Strengths: - The authors present a simple, yet effective technique for extracting text-based editing directions in a pretrained diffusion model. This is particularly useful when a desired edit is difficult to describe with language.
- Extracting this direction requires only a single exemplar pair, making it applicable and easy to use across many settings.
- The instruction concatenation technique is very interesting and beneficial for giving users more control in cases where the extracted direction is not perfect. I almost missed this, but found it to be a very interesting addition. I would consider adding this to the main paper if possible, as it provides an additional advantage over alternative techniques.
- Comparisons are performed to a variety of techniques including InstructPix2Pix and SDEdit. Further evaluations are performed on Textual Inversion, Imagic, and Null-Text Inversion. The results provided by the authors seem to outperform existing techniques across a variety of edits and images.
Weaknesses: - It appears that the proposed method is more effective on style-based edits, as I could not find any examples that require a large change in the structure of the object. How does the method fare when the prompt pair depicts a change in structure (e.g., sitting/jumping)? I could not find examples or a discussion on this.
- In Figure 7 the authors show that a single example may not generalize well to all different test sets. This is expected, but I am wondering how sensitive the method is to the choice of the exemplar pair. Assessing this could be challenging, but one possible way to assess this could be evaluating the success rate of a single input pair across a set of test images. This could be done either quantitatively (e.g., using CLIP-space similarities) or qualitatively using a user study, if possible.
- In Table 1, the authors demonstrate that adding additional exemplars can help achieve better results. It would be great to see visual results where multiple pairs assisted in improving the results.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: **General Comments:**
- The idea of the cosine direction to represent the editing direction has been used previously (e.g., StyleCLIP, StyleGAN-NADA). These works should be cited when introducing the CLIP-direction loss (Equation 3).
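For reference, the CLIP-direction loss this bullet refers to is typically one minus the cosine similarity between two CLIP-space edit directions; a minimal numpy sketch with placeholder embeddings (not the paper's exact formulation):

```python
import numpy as np

def clip_direction_loss(e_before, e_after, e_src, e_edit):
    # 1 - cos(d_ref, d_cur): penalizes the current edit direction
    # deviating from the exemplar's "after - before" direction
    d_ref = e_after - e_before
    d_cur = e_edit - e_src
    cos = d_ref @ d_cur / (np.linalg.norm(d_ref) * np.linalg.norm(d_cur))
    return 1.0 - cos

# placeholder 4-d "CLIP embeddings"; real ones come from a CLIP encoder
before = np.array([1.0, 0.0, 0.0, 0.0])
after  = np.array([1.0, 1.0, 0.0, 0.0])
src    = np.array([0.0, 0.0, 1.0, 0.0])
edit   = src + (after - before)   # an edit exactly following the exemplar
loss = clip_direction_loss(before, after, src, edit)  # -> 0.0 (perfect alignment)
```

This directional form, rather than a raw embedding distance, is the formulation popularized by StyleCLIP and StyleGAN-NADA.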
**Questions:**
- Would starting from an initial prompt depicting the desired edit speed up convergence? Similarly, does the optimization process converge faster if we provide more pairs?
- What is the significance of using InstructPix2Pix as the network? Would simply using the original Stable Diffusion model still lead to good editing results? This would be helpful to see whether the proposed scheme is robust to different diffusion models.
- Is there any significance to the “tokens” learned in the optimized conditioning text embedding? That is, is it possible to interpret the optimized token in human-understandable tokens? E.g., does quantizing the learned embeddings to real tokens lead to a coherent text prompt?
- How does changing the number of learnable tokens affect the result? How was the choice of using 10 tokens made? It would be very useful to further analyze this design choice with an ablation study.
- For the quantitative evaluations (Section 4.3), the authors specify that 300 editing directions were sampled from the clean-instructpix2pix dataset. Were these prompts used in the training of InstructPix2Pix?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discuss several limitations of their method including the reliance on the diffusion model and the ability to perform small edits over the image.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback! We address your questions/concerns below.
**It appears that the proposed method is more effective on style-based edits, as I could not find any examples to require a large change in the structure of the object. [...]**
Edits that “require a large change in the structure of the object” are typically more challenging than style-based edits.
We do have some examples that might answer your question:
- Supplementary (Figure 4, Last row): Add a gun, add sunflowers, add roses.
- Supplementary (index.html file, Case ID: 0282472): Add a herd of deer
- Supplementary (index.html file, Case ID: 0135140): Add a thunderstorm
- Supplementary (index.html file, Case ID: 0225198): Add a tiara
- … and more.
**[...] a single example may not generalize well to all different test sets. This is expected, but I am wondering how sensitive the method is to the choice of the exemplar pair. [...] one possible way to assess this could be evaluating the success rate of a single input pair across a set of test images. [...]**
This is a very interesting question and suggestion. We have performed an initial investigation, but do not see any clear pattern yet. We will continue to look into this and report any interesting findings in the final version.
**In Table 1, the authors demonstrate that adding additional examples can help achieve better results. It would be great to see visual results [...].**
Thank you for your suggestion! We have created a Figure for this, please refer to Additional PDF File (Figure 2). We will also add this Figure into our Supplementary material.
**Would starting from an initial prompt depicting the desired edit speed up convergence? Similarly, does the optimization process converge faster if we provide more pairs?**
We find that initialization from the ground-truth prompt hurts performance (Table 1, Row 4-8).
We hypothesize this may be because human language can be mismatched with machine preference [1].
More example pairs help to learn the desired edits faster and more precisely. Consider the setting where we fix all hyper-parameters and vary only the number of examples. The results (Table 1, Rows 4-8) indicate that under the same number of optimization steps (N = 1000), having more examples (1 -> 4) boosts the CLIP Directional Similarity score (0.113 -> 0.133). These scores indicate that the learned instruction is more aligned with the desired edit.
**What is the significance of using InstructPix2Pix as the network? [...]**
InstructPix2Pix [4] directly builds upon a pretrained Stable Diffusion model’s vast text-to-image generation capabilities, while further finetuning it with 450,000 (text instruction, before image, after image) triplets. Thus, its learned instruction space is already rich enough to cover many image-to-image translation edits, and is therefore a good starting point for our approach.
Another reason is that the architecture of InstructPix2Pix is a natural fit for our task. Exploring how to make our approach work for general text-to-image models (e.g. Stable Diffusion) would be a good future research direction.
**Is there any significance to the “tokens” learned in the optimized conditioning text embedding? That is, is it possible to interpret the optimized token in human-understandable tokens? [...]**
This is an interesting question!
We have tried converting optimized tokens into natural tokens; however, we find that the natural tokens might not reflect the edit.
Inspired by [3], we mapped each learned token to its nearest token within the [CLIP vocabulary corpus](https://huggingface.co/timbrooks/instruct-pix2pix/raw/main/tokenizer/vocab.json).
For example, the human-understandable tokens in Figure 4 (Row 4, Column 2) are `ius souri anemone beans required throat alise eee`. These tokens are unrelated to the desired edit, which should be something similar to "Turn it into a watercolor painting."
Our findings align with previous empirical evidence [2], which suggests a disconnect between continuous prompts and their discrete interpretations. In [2], authors demonstrated that an accurate continuous prompt tailored for a specific task (e.g., describing a precise painting) could be projected into arbitrary or even irrelevant text/statements (e.g., code or a question).
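The nearest-token lookup described in this reply reduces to a cosine nearest-neighbor search over the vocabulary embedding table; a toy numpy sketch where random matrices stand in for the real CLIP vocabulary:

```python
import numpy as np

def project_to_vocab(learned, vocab_emb):
    # cosine-normalize rows, then pick the nearest vocabulary entry
    # for each learned (continuous) token embedding
    L = learned / np.linalg.norm(learned, axis=1, keepdims=True)
    V = vocab_emb / np.linalg.norm(vocab_emb, axis=1, keepdims=True)
    return np.argmax(L @ V.T, axis=1)

rng = np.random.default_rng(0)
vocab = rng.normal(size=(100, 16))      # toy vocabulary: 100 token embeddings
ids = np.array([7, 42, 99])
learned = vocab[ids] + 0.01 * rng.normal(size=(3, 16))  # near-copies of known rows
recovered = project_to_vocab(learned, vocab)
# small perturbations still map back to the original token ids
```

In the real setting the optimized embeddings sit far from any vocabulary row, which is why the projected tokens read as gibberish despite the continuous prompt working well.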
**How does changing the number of learnable tokens affect the result? How was the choice of using 10 tokens made? [..]**
We employ [3] to initialize tokens (Section 5 - Initialization). In [3], 8 tokens are shown to be sufficient to yield stable performance. Adding the <|startoftext|> and <|endoftext|> tokens gives us 10 tokens as initialization.
We find that longer prompts do not necessarily produce better results (as demonstrated in Section 4.2 of [3]).
**[...] Were 300 editing directions used in the training of InstructPix2Pix?**
Yes, as the Clean-InstructPix2Pix dataset [4] does not come with a train/test split.
However, there are many in-the-wild examples: All images in Figure 2 and Figure 4 are in-the-wild photos (not in InstructPix2Pix dataset).
Last row of Figure 3 is also in-the-wild. You can find further in-the-wild examples in the Supplementary (all Figure 1, Figure 2 (Row 1, 3-7), all Figure 4).
**The idea of the cosine direction to represent the editing direction has been used previously (e.g., StyleCLIP, StyleGAN-NADA). These works should be cited when introducing the CLIP-direction loss (Equation 3).**
Thank you for pointing it out. We have updated our paper accordingly.
*Reference:*
[1] Yaru et al., *Optimizing Prompts for Text-to-Image Generation*, arXiv 2022.
[2] Daniel et al., *Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts*, ACL 2022.
[3] Yuxin et al., *Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery*, arXiv 2023.
[4] Tim et al., *InstructPix2Pix Learning to Follow Image Editing Instructions*, CVPR 2023.
---
Rebuttal Comment 1.1:
Title: Please check the author's responses
Comment: Dear Reviewer G8X8,
Could you go over the authors' responses, as well as the questions raised by the other reviewers?
Particularly, do you agree with the issues raised by the other reviewers? Do the authors' responses address the questions? Do you have any further questions for the authors?
Thanks, Your AC | Summary: This paper proposes a method for image editing via visual prompting. Given pairs of examples that represent the “before” and “after” images of an edit, this framework can learn a text-based editing direction that can perform the same edit on new images. Experimental results show the effectiveness of the proposed approach, even with one example pair.
Strengths: The proposed framework can learn an edit direction that can be applied on new images, even with one example pair.
Weaknesses: 1. The results in the first row of Fig. 3 have some artifacts in the hair and face; the results in the third row of Fig. 3 have undesired changes in the background. Thus, the performance with only one example pair is not very good. The visual results in Fig. 4 are also not satisfactory. There should be a visual user study.
2. How to set the values of $\lambda_{mse}$ and $\lambda_{clip}$ for different example pairs? I wonder about the robustness of the proposed method towards different hyper-parameters. Moreover, the CLIP space cannot describe all texture changes, and we should consider the balance between the reconstruction and the CLIP guidance.
3. This framework is built on InstructPix2Pix, which is trained with a number of “before” and “after” pairs. Thus, I wonder about the performance when an example pair not in InstructPix2Pix’s training dataset is set as the condition.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please answer the questions in the weakness section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The results of this paper are not sufficient, and more analysis of the parameters could further improve the contributions of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback! We address your questions/concerns below.
**The results in the first row of Fig. 3 have some artifacts in the hair and face; the results in the third row of Fig. 3 have the undesired changes in the background.**
We admit that the background can sometimes change (as in the Fig. 3 wolf -> dog result).
You can notice that other baselines have background changes as well (Figure 3, Row 3, Column 5-7).
We find that our results can be improved by more carefully selecting the noise.
How to automatically select better noise is an ongoing research direction [1,2] (please refer to the Rebuttal PDF file, Figure 3, for more qualitative results).
It is worth noting that visual prompting has been shown to be sensitive to the example pairs as well [3,4]. In Figure 3 (Row 3, Columns 1-2), there is snow in the background of the before-and-after pair. Thus, the learned instruction might have picked up something related to “snow” to better capture the edit. We briefly discuss this in Section 6 (Discussion, Figure 7c), as what makes a good example pair remains an open question.
**The performance with only one example pair is not very good. The visual results in Fig. 4 are also not satisfactory. There should be a visual user study.**
Without further clarification, we find the comments regarding Figure 4 difficult to respond to.
Figure 4 demonstrates that by showing an example pair, our method can learn and replicate the distinctive characteristics of each specific art style (drawing, painting) for the same text instruction “Turn it into a drawing/ painting”.
We perceive that the learned style is visually close to the given example pair.
Regarding the visual user study: we did consider conducting a user study.
However, we decided against it as *we felt it is not possible to guarantee a fair comparison* between our approach and the text-conditioned baselines.
For example, in Figure 3 (Second row), the edit prompt is "make it a watercolor painting". Without access to an additional before-and-after image like our approach, the outputs from all baselines appear equally valid (Columns 5-6), as they successfully transform the test image into a "watercolor painting". However, they may miss certain aspects that users want from the example image, such as the red colored flowers (Column 2).
Conversely, it is also unfair to our approach (which uses only the before-and-after pair), as we lack knowledge of the exact edit provided by the text prompt. Using Figure 3 (Second row) as an example once more, not knowing the specific text prompt might lead to various interpretations (e.g., "colorize it", "make the coat pink", etc.). Thus it is unclear to us how to make a fair comparison (i.e., whether we should provide a reference image or an instruction during the user study).
**How to set the values of $\lambda_{clip}$ and $\lambda_{mse}$ for different example pairs? I wonder the robustness of the proposed methods towards different hyper-parameters. Moreover, The CLIP space cannot describe all texture changes, and we should consider the balance between the reconstruction and the CLIP guidance.**
We mentioned in Section 4.1 (Line 170-173) that in all experiments, we set hyperparameters of $\lambda_{mse}$ and $\lambda_{clip}$ to 4 and 0.1, respectively. We find that this is generally sufficient to achieve good results, and that our method is not very sensitive to small changes in their exact values.
We do consider the image reconstruction by Image CLIP Similarity (similar to [5]), as shown in Figure 5. Image CLIP Similarity score indicates how similar the "test" and "edited" images are. In other words, it tells us how much the edited image looks different from the test image. Results show that our method performs similarly to other state-of-the-art models.
**This framework is built on InstructPix2Pix, which is trained with a number of “before” and “after” pairs. Thus, I wonder about the performance when an example pair not in InstructPix2Pix’s training dataset is set as the condition.**
We indeed have many in-the-wild examples!
All images in Figure 2 and Figure 4 are in-the-wild photos (not in InstructPix2Pix dataset). Last row of Figure 3 is also in-the-wild. You can find more in-the-wild examples in the Supplementary (Figure 1, Figure 2 (Row 1, 3-7), all Figure 4).
*Reference:*
[1] Clinton et al., *Interpolating between Images with Diffusion Models*, arXiv 2023. (Section 4.4)
[2] Bahjat et al., *Imagic: Text-Based Real Image Editing with Diffusion Models*, CVPR 2023. (Supplementary - Section B)
[3] Yuanhan et al., *What Makes Good Examples for Visual In-Context Learning?*, arXiv 2023.
[4] Yanpeng et al., *Exploring Effective Factors for Improving Visual In-Context Learning*, arXiv 2023.
[5] Tim et al., *InstructPix2Pix Learning to Follow Image Editing Instructions*, CVPR 2023.
---
Rebuttal Comment 1.1:
Title: Please check the authors' responses
Comment: Dear Reviewer vo15,
Could you go over the authors' responses, as well as the questions raised by the other reviewers?
It seems that the authors do provide various results in the paper and supplementary material to support their method, as well as the study on the usage of the hyper-parameters. Do these convince you? Do you have any further questions for the authors?
Thanks, Your AC
---
Rebuttal Comment 1.2:
Comment: Thanks to the authors for the rebuttal. After reading the rebuttal and the discussion from the other reviewers, I think some of my concerns have been resolved. But I am still concerned about the quality of the results. In particular, I cannot agree that the figures mentioned in the fourth question are in-the-wild; I think the answers are subjective. So I decided to raise my score from borderline reject to borderline accept.
---
Reply to Comment 1.2.1:
Comment: Thank you for your constructive feedback!
Regarding the Figures mentioned in the fourth question: We meant that they are "in-the-wild" as they do not belong to the InstructPix2Pix's training data (as originally asked by the reviewer). Specifically:
* (1) The before images in Figure 2, Figure 3 (Last row), Figure 4 (Row 1-3), are randomly collected from the internet based on Google Search of random concepts (Links to original photos are provided in Supplementary, Line 78 - 88);
* (2) Figure 4 (Row 4-6) shows images of the dog of one of the authors! | Summary: - This paper proposes a novel image editing method via visual prompts
- This paper introduces a new method to bind text-based transferring to a specific conversion between image pairs.
Strengths: - The paper is well written and easy to follow.
- The motivation is clear and reasonable.
- The proposed method of using image pairs to guide text directions is novel. It introduces a new method to bind text-based transferring to a specific conversion between image pairs, which can be useful.
- The effectiveness of the method is validated.
Weaknesses: - According to Section 4.1, a key issue is that this work relies on existing pre-trained models to obtain high-quality paired images. I wonder whether the quality relies heavily on the quality of these pre-trained models?
- The method may have limited application, since the style transfer has to be built on existing image pairs. This means a stable style-transfer model must already be available, and the contribution of this work is to bind the specific transfer to a text prompt. However, in practical applications, a desired style transfer may not have paired ground-truth images available.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I wonder for one text prompt, how many pairs are required to train a stable directional prompt?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weaknesses; the main limitation would be the availability of the image pairs during real application.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback! We address your questions/concerns below.
**According to section 4.1, a key issue is that this work relies on existing pre-trained models to obtain high-quality paired images, I wonder will the quality heavily relied on the quality of these pre-trained models?**
In practice, paired images can be provided by users. The paired examples (before-and-after) do not necessarily have to be generated by pretrained models.
For example, in Figure 1 (Row 2), it is a real roadmap and satellite image. In Figure 3 (Last row), it is a real human face and photoshopped version of it. Same with Figure 7c, before-and-after images are individually collected from the internet (e.g., [this reddit link](https://www.reddit.com/r/Frozen/comments/j4afdf/elsa_anna_kristoff_in_real_life/)). Full photo attributions can be found in the last section of the Supplementary material.
So the quality does not rely on pre-trained models.
**The method may have limited application, since the style transfer has to be built based on existing image pairs. Which means a stable style transfer model is already available, the contribution of this work is to bind the specific transfer to a text prompt.**
Style transfer is just one application of our method. Beyond style transfer, we can also perform other image editing tasks (i.e., domain translation). For example:
- Figure 3 (Row 3): wolf <-> dog
- Figure 6 (Row 1): dog <-> fox
- Figure 4 (Supplementary, Row 4): add sunflowers, add roses, etc.
- Supplementary (index.html file, Case ID: 0282472): add deer
- Supplementary (index.html file, Case ID: 0055735): add fog
- Supplementary (index.html file, Case ID: 0369699: replace cliff with skyscraper
- … and more.
**During the practical application, a desired style transfer may not have available paired ground-truth images.**
There are many use cases where such paired ground-truth is available.
Recall our example in the Introduction (Figure 1, Second row): imagine that you want to transform a roadmap image into an aerial one. In this case, you only have to annotate one example, and our method can automatically apply that transformation to new images.
Another use case is *learning your specific drawing style*. Imagine that you have spent hours, days, or months drawing your cat in your very own style. It's very stunning, and now you want to draw your dog in the same style.
Instead of spending hours, days, or months starting all over again, you can use our method to learn your very specific style… Then a new drawing will only be minutes away!
**I wonder for one text prompt, how many pairs are required to train a stable directional prompt?**
Although more pairs can help (as we show in Figure 2 in the Rebuttal PDF file), in general, only one before-and-after pair is sufficient to learn the edit direction (Table 1, "Fixed noise", Row 9-12).
---
Rebuttal Comment 1.1:
Title: Please check authors' responses
Comment: Dear Reviewer VA7A,
Could you go over the authors' responses, as well as the questions raised by the other reviewers?
Do the authors' responses about the limitation of relying image pairs convince you? Do you have any further questions for the authors?
Thanks, Your AC
---
Rebuttal Comment 1.2:
Comment: Thank you for the comprehensive reply. After reading the reply and other reviewers' comments, I will keep my score.
---
Reply to Comment 1.2.1:
Comment: Thank you for your time and effort in reviewing both our manuscript and rebuttal. | Summary: This paper investigates image editing via visual prompting, which is useful when textual descriptions cannot describe desired edits. The proposed framework inverts visual prompts into editing instructions and learns directions in the text space of the pretrained InstructPix2Pix model. This edit direction is learned from a pair of query and target images to perform the desired image editing. Results suggest that one example is sufficient to learn such directions.
Strengths: - It is fascinating to see how visual cues can be translated into text-based editing directions, especially when the edits are difficult to convey through written instructions alone.
- The editing approach that is learned can be used to modify new test images with impressive accuracy. It is impressive that comparable results can be achieved even with just one example pair.
- Practical insights into image editing with diffusion models, including the potential to apply the same noise schedule for both training and testing is also interesting
Weaknesses: The paper has a significant weakness in that it fails to acknowledge seminal work on image analogies, specifically the research conducted by Hertzmann et al. (2001) and its deep learning adaptation, Deep Image Analogies (Liao et al., 2017). While the proposed approach shares similarities with the concept of image analogies, the use of InstructPix2Pix with a similar analogy lacks novelty, except for the editing directions found in the textual embedding space.
Furthermore, the paper neglects to mention other related works such as SINE: SINgle Image Editing with Text-to-Image Diffusion Models (CVPR 2023), which employs style transfer for image editing through a diffusion model (as seen in Figure 11 and Section 4.4 of their paper) similar to image prompting. Additionally, MIDMs: Matching Interleaved Diffusion Models for Exemplar-based Image Translation (AAAI 2023) is also absent from the current work. It is crucial to compare and discuss these related works for a comprehensive analysis, which is currently missing in the manuscript.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is it possible to enhance the quality of editing directions by providing more examples (before and after images)? Also, what is the average time required to find a single editing direction?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper needs to properly acknowledge previous work on image analogies and lacks originality in its approach. Other related works should be mentioned and compared for a comprehensive analysis, which is currently lacking.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for finding our paper "fascinating" and "practical". We answer your questions/concerns as below:
**The paper has a significant weakness in that it fails to acknowledge seminal work on image analogies, specifically, the research conducted by Hertzmann et al. (2001) and their deep learning adaptation, deep image analogies (Liao et al., 2017)... Other related works should be mentioned and compared for a comprehensive analysis, which is currently lacking.**
Thank you for your suggestion.
We agree that *the seminal image analogies paper should have been cited*.
We will include Image Analogies [1], Deep Image Analogies [2], and other related works in the Related Works section on Visual Prompting (Line 87-99).
However, *we do not think that we lack comprehensive analysis*.
We identify our work as being in the intersection of Visual Prompting (1) and Image Editing (2).
Thus, we conducted extensive comparisons against state-of-the-art frameworks in these two domains:
- (1) Visual Prompting: Our work is inspired by Visual Prompting via Image Inpainting (NeurIPS 2022) [3].
To some extent, we believe that this work can be viewed as a modern deep learning variant of Image Analogies.
Thus, comparing against Visual Prompting amounts to comparing against one of the most recent state-of-the-art works in the spirit of Image Analogies.
- (2) Image Editing: We conducted quantitative and qualitative experiments on a handful of state-of-the-art baselines in image editing (e.g. InstructPix2Pix [4], SDEdit [5], Imagic [6], Null-text Inversion [7]).
Furthermore, we also presented comparisons between ours and Textual Inversion method [8], which can be found in the Supplementary material (Section A).
**While the proposed approach shares similarities with the concept of image analogies, the use of instruct pix to pix with a similar analogy lacks novelty, except for the editing directions found in textual embedding space. The paper lacks originality in its approach.**
We respectfully disagree (and we believe that other reviewers disagree too; e.g. [#VA7A](https://openreview.net/forum?id=l9BsCh8ikK&noteId=dVSnD0IySj_): "proposed method [...] is novel").
It is true that we build upon InstructPix2Pix [4], but InstructPix2Pix does not support visual prompting (it seems the reviewer also acknowledges this, per "It is fascinating to see how visual cues can be translated into text-based editing directions"?).
We have proposed a novel approach to enable image editing through visual prompting.
Regarding your comment “except for the editing directions found in textual embedding space”: we actually see this as a major contribution, as we are the first to do so, and we empirically demonstrate that it can lead to clear advantages in image editing, especially when the desired edit is difficult to describe with language.
**Furthermore, the paper neglects to mention other related works such as SINE: SINgle Image Editing with Text-to-Image Diffusion Models (CVPR 2023) which employs style transfer for image editing through a diffusion model as seen in Figure 11 and Section 4.4 of their paper similar to image prompting. (Additionally, MIDMs: Matching Interleaved Diffusion Models for Exemplar-based Image Translation (AAAI 2023) is also absent in the current work. It is crucial to compare and discuss these related works for a comprehensive analysis, which is currently missing in the manuscript.)**
We are happy to include these papers in the Related Work section, but note that they are only loosely related to our work.
The setting in Section 4.4 of SINE [9] is different from our visual prompting setting. Recall that we learn an edit from a before-and-after pair, then apply it to a test image. In contrast, in Section 4.4 of SINE, there is only a test image and a reference image (style transfer based on reference image). Thus, our setting is more general than SINE. Moreover, SINE requires finetuning the diffusion model again for each edit, while ours only needs to optimize one embedding vector.
Likewise, our work is also only loosely similar to the MIDMs [10] setting. MIDMs is designed for exemplar-based image translation, i.e., producing image $I_{xy}$ by combining content $I_x$ and style $I_y$. In contrast, our method aims to learn edits from before-and-after images (i.e., learning the transformation from $I_x$ to $I_y$, and then applying that learned transformation to another test image). Moreover, the MIDMs framework operates in the latent (noise) space, while ours operates in the text space.
**What if more examples are provided?**
More examples improve the performance, as depicted in Table 1 (last 4 rows). Results indicate that increasing the number of example pairs (1 -> 4) increases the Directional CLIP score (0.113 -> 0.133).
**What is the average time required to find a single editing direction?**
It takes roughly 7 minutes to find an editing direction. More implementation details can be found in the Supplementary material (Section C1).
*Reference:*
[1] Hertzmann et al., *Image analogies*, SIGGRAPH 2001.
[2] Liao et al., *Visual attribute transfer through deep image analogy*, SIGGRAPH 2017.
[3] Bar et al., *Visual Prompting via Image Inpainting*, NeurIPS 2022.
[4] Brooks et al., *InstructPix2Pix: Learning to Follow Image Editing Instructions*, CVPR 2023.
[5] Meng et al., *SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations*, ICLR 2022.
[6] Kawar et al., *Imagic: Text-Based Real Image Editing with Diffusion Models*, CVPR 2023.
[7] Mokady et al., *Null-text Inversion for Editing Real Images using Guided Diffusion Models*, CVPR 2023.
[8] Gal et al., *An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion*, ICLR 2023.
[9] Zhang et al., *SINE: SINgle Image Editing with Text-to-Image Diffusion Models*, CVPR 2023.
[10] Seo et al., *MIDMs: Matching Interleaved Diffusion Models for Exemplar-based Image Translation*, AAAI 2023.
---
Rebuttal Comment 1.1:
Title: Please check authors' responses
Comment: Dear Reviewer h5yQ,
Could you go over the authors' responses, as well as the questions raised by the other reviewers?
Do the authors' responses convince you, especially on the originality/novelty part? Do you have any further questions for the authors?
Thanks,
Your AC | Rebuttal 1:
Rebuttal: We propose a framework for *inverting visual prompts into editing instructions* for text-to-image diffusion models.
Furthermore, our method can combine instructions between learned and natural language, *yielding a hybrid editing instruction that is more precise*.
We are grateful that **all reviewers appreciate the originality, experimentation, clarity, and/or significance** of our paper.
- **Originality**: "very interesting and important" (#uc51); "impressive" (#h5yQ), "clear and reasonable", (#VA7A), "simple, yet effective" (#G8X8), "can learn an edit ... even with one example pair." (#vo15).
- **Experimentation**: "thorough ablation study" (#uc51), "effectiveness is validated" (#VA7A).
- **Clarity**: excellent presentation (#G8X8), "well written" (#VA7A, #uc51), good (#h5yQ, #vo15).
- **Significance**: "fascinating" and "practical" (#h5yQ), "intuitive application" (#uc51), "useful" (#VA7A), "applicable and easy to use" (#G8X8).
---
We thank the reviewers for their time and effort in reviewing our paper.
We have incorporated reviewers' feedback into our revision.
We are glad that reviewer #G8X8 finds the hybrid instruction [*“very interesting and beneficial”*](https://openreview.net/forum?id=l9BsCh8ikK&noteId=tZfjSeHAms).
We will move this application into the main paper from the supplementary material as suggested.
Answers to individual reviewers are addressed below each review. Please let us know if you have any additional questions or concerns.
Pdf: /pdf/047a0c9750ea2e4ef91520868d5ef8d1476229ff.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the authors propose a new method that can perform visual prompting via a pair of exemplar images through a pretrained text-based instruction image editing model. The method introduced in this paper only requires optimizing over the text conditioning vector in order to perform visual instruction editing. The authors have shown strong qualitative results demonstrating various application scenarios.
Strengths: The model introduced in this paper solves a very interesting and important application problem: exemplar-based visual prompting of image generative models. The paper is very well written, the results look very promising, and the model can be directly applied by everyday users in a large range of intuitive applications. It also leverages a pretrained model and only requires optimizing a vector, which makes it more appealing in applications. The authors have also performed a thorough ablation study.
Weaknesses: 1. The quantitative evaluations are not very comprehensive. The authors only sampled 1k images to perform quantitative analysis, and did not report any fidelity scores. For editing controllability, they only report CLIP related scores, which is not very representative in many circumstances [1].
2. When explaining the method, the authors sometimes mix the prior work with their contributions. A prime example would be Section 3.2, which is redundant because it is the same as InstructPix2Pix, but sectioning in this way makes it look like a new contribution at the first glance. I would recommend the authors to make clear distinctions between their contributions and the prior literature.
[1] Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, Candace Ross. “Winoground: Probing vision and language models for visio-linguistic compositionality”. CVPR 2022. https://arxiv.org/pdf/2204.03162.pdf
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How similar should the exemplar visual prompt and the query image look? Can the authors give some additional ablation study on the similarity between the visual prompt and the query images?
2. How long does it take to sample one image?
3. Since CLIP is not a perfect encoder and has various known limitations, I am wondering how well this method handles tasks such as color changes or duplicating objects.
4. Related to Weakness (1), I think it is also possible to report more standardized metrics such as KID (fidelity score for small number of samples), IoU and (masked) LPIPS with tasks like semantic segmentation map to image and image compositing. Can the authors report some standardized metrics for some of these tasks?
5. Why do more examples hurt the performance (according to Table 1)?
~~I am happy to raise my score if the authors can add experiments to address my concerns.~~
***After the rebuttal discussion, I would like to raise my score from 5 to 6.***
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Although the limitations are extensively discussed in the paper, the discussion of potential ethical issues is missing. Since this paper uses a pretrained large image generative model, which is known to have various societal issues, I would highly recommend the authors include a broader impact statement in their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive feedback! We address your questions/concerns below.
**Only sampled 1k imgs to perform quantitative analysis**
We believe that ~1000 images are sufficient to validate our approach. This is in line with other related work; e.g., Imagic (CVPR 2023): ~100 image pairs; Null-text Inversion (CVPR 2023): ~1000 test images.
**[...] did not report any fidelity scores**
Thanks for your suggestion! Below, we present a new experiment using the LPIPS score [1] on 25 in-the-wild image pairs.
To recap, we use a **before-and-after** image pair to learn an edit, then apply it to the **test** image to get an **edited** version.
We use LPIPS in two ways:
- LPIPS(test, edited): Measures the similarity between the test and edited image. Lower score indicates that the edited image has similar image fidelity to the test image.
- LPIPS(after, edited): Assesses how the edited image aligns with the after image. Lower score indicates that the edits successfully follow the visual cues provided by the after image.
|Metric|SDEdit [3]|Ip2p [4]|Ours|
|---|---|---|---|
|LPIPS (test, edited)|**31.34**|46.40| 43.73|
|LPIPS (after, edited)|71.42|71.15|**62.52**|
Results show that our model achieves competitive performance relative to state-of-the-art baselines. However, it is worth noting that while an edit may lead to significant changes, a lower LPIPS(test, edited) score can indicate a less meaningful edit, as the edited image closely resembles the original test image.
Furthermore, on the LPIPS(after, edited) score, our model scores lowest. This suggests that our edited images align more perceptually with the after images compared to text-conditioned methods. Note that the baselines do not have access to the before-and-after pair; however, this precisely supports the key idea of our approach, i.e., using before-and-after images enables intended edits that may not be sufficiently captured with text alone.
It is worth highlighting that real datasets for visual prompting-based image editing are lacking. Substantial ongoing research efforts are dedicated to creating diverse and precise paired datasets [2]. As a result, we were unable to evaluate on a larger dataset.
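As an illustration, the two-way LPIPS protocol above can be sketched as follows. Note that `perceptual_distance` here is a simplified stand-in for the real LPIPS network (which compares deep features, not raw pixels), so the numbers are purely illustrative:

```python
import numpy as np

def perceptual_distance(a, b):
    """Stand-in for LPIPS: mean squared pixel difference.
    The real metric runs both images through a deep network."""
    return float(np.mean((a - b) ** 2))

def two_way_scores(test_img, after_img, edited_img):
    """Evaluate an edit from both sides:
    - (test, edited): low = edited image keeps the fidelity of the test image
    - (after, edited): low = edit follows the visual cue of the after image
    """
    return {
        "lpips_test_edited": perceptual_distance(test_img, edited_img),
        "lpips_after_edited": perceptual_distance(after_img, edited_img),
    }

# toy images: an edit that moves the test image halfway toward the after image
test = np.zeros((8, 8, 3))
after = np.ones((8, 8, 3))
edited = 0.5 * test + 0.5 * after

scores = two_way_scores(test, after, edited)
```

Reporting both directions separates fidelity (closeness to the test image) from edit adherence (closeness to the after image), which is why neither score alone tells the full story.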
**Standardized metrics for tasks like semantic segmentation**
As we focus on image editing, we did not report any downstream computer vision tasks (i.e., semantic segmentation, image compositing), which are beyond our scope.
**How similar does the exemplar visual prompt and query image should look?**
As long as the example reference images and the query image are roughly in the same domain, there are no constraints on how similar they should be.
For example, in Fig. 3 (last row), different people appear in the example and test images. In Fig. 6 (first row), the backgrounds and people in the example and test images change completely.
On the other hand, for entirely different domains (e.g., dog->cat as example, landscape as query), our method will not work.
**How long does it take to sample one image?**
It takes 4 seconds to sample one image.
**Since CLIP is not a perfect encoder [...] how well this method handles color changes or duplicating objects.**
Qualitative results about color changing cases can be found in our main paper:
- Fig. 3 (Row 3: gray wolf -> brown dog)
- Fig. 6 (Row 2: green grass field -> brown grass field)
It can also be spotted several times in our Supplementary (index.html), e.g., ID: 0207894 (red -> blue hair)
For "duplicating objects" it seems like your question is: What happens when there are two visually similar objects in a test photo? Please refer to ID: 0307987 - boats -> camels (Supplementary, index.html) as an example. The results show that each boat shown in the test images will be transformed into a camel. For more results, please see ID 0258921 (cow -> giraffe), ID: 0016482 (human -> cheetah).
With all the cases we have listed above, we see that our approach works reasonably well with a CLIP encoder.
**Why do more examples hurt the performance (Tab. 1)?**
More examples improve the performance. Recall that in diffusion-based models, different noises lead to different outputs (Fig. 6). Thus, in Tab. 1 (Columns 6-8, "Random noises"), quantitative results can vary a lot.
However, we find that using the same noise sequence from training to test leads to more stable results (Sec. 5 - Noises). Based on this observation, we obtain more reliable quantitative results in Tab. 1 (Columns 9-11, "Fixed noise"). In particular, more example pairs (1 -> 4) increase the Directional CLIP score (0.113 -> 0.133).
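The effect of fixing the noise sequence can be illustrated with a toy sampler (a hypothetical stand-in for actual diffusion sampling; only the seeding logic matters here):

```python
import numpy as np

def sample_edit(noise_seq, strength):
    """Toy stand-in for a diffusion sampler: the output depends on both
    the noise sequence and the learned edit strength."""
    return noise_seq + strength

# same fixed noise sequence from training to test -> identical outputs,
# so quantitative comparisons across settings are stable
fixed_noise = np.random.default_rng(42).standard_normal(10)
a = sample_edit(fixed_noise, 0.5)
b = sample_edit(fixed_noise, 0.5)

# fresh random noise each run -> outputs vary, so metrics fluctuate
c = sample_edit(np.random.default_rng(1).standard_normal(10), 0.5)
d = sample_edit(np.random.default_rng(2).standard_normal(10), 0.5)
```

With a fixed noise sequence, any change in the metric reflects the edit itself rather than sampling variance.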
**[...] the authors sometimes mix the prior work with their contributions. [...] Sec. 3.2 is redundant because it is the same as InstructPix2Pix [...]**
Thanks for your feedback. We do not want to cause any confusion here.
While we use the same image reconstruction loss (MSE loss) as in InstructPix2Pix [4], our goals differ.
- InstructPix2Pix finetunes the model: update $\epsilon_{\theta}$, fixed $c_T$
- Ours optimizes the instruction: fixed $\epsilon_{\theta}$, update $c_T$ (As presented in Algorithm 1, Line 14)
We will change Line 141:
“We ~~follow the same strategy as [4]~~ employ a pretrained text-conditioned image editing model proposed in [4] [...]”
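The distinction can be sketched with a toy linear "denoiser" (an illustrative assumption, not the actual architecture): the model parameters stay frozen while gradient descent updates only the instruction embedding, mirroring the fixed-$\epsilon_{\theta}$ / updated-$c_T$ split described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy frozen "denoiser" eps_hat = A @ z + B @ c, standing in for eps_theta
d = 8
A = rng.standard_normal((d, d)) * 0.1   # frozen weights
B = rng.standard_normal((d, d)) * 0.1   # frozen weights
z = rng.standard_normal(d)              # noisy latent of the "after" image
eps = rng.standard_normal(d)            # noise target of the MSE loss

def loss(c):
    r = A @ z + B @ c - eps
    return float(r @ r)

c = np.zeros(d)                         # learnable instruction embedding c_T
initial = loss(c)
for _ in range(500):
    r = A @ z + B @ c - eps
    c -= 0.05 * (2.0 * B.T @ r)         # gradient w.r.t. c only; A, B untouched
final = loss(c)
```

InstructPix2Pix-style finetuning would instead update `A` and `B` while holding `c` fixed; the loss is the same, but the optimization variable differs.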
**Although the limitations are extensively discussed [...] discussion of potential ethical issues is missing.**
It is true that our method might inherit unwanted bias from diffusion models (As we briefly mentioned in Sec. 6, Line 256). We will further clarify potential ethical issues in our revision.
*Reference:*
[1] Zhang et al., *The Unreasonable Effectiveness of Deep Features as a Perceptual Metric*, CVPR 2018.
[2] Zhang et al., *MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing*, arXiv 2023.
[3] Meng et al., *SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations*, ICLR 2022.
[4] Brooks et al., *InstructPix2Pix: Learning to Follow Image Editing Instructions*, CVPR 2023.
---
Rebuttal Comment 1.1:
Title: Please check authors' responses
Comment: Dear Reviewer uc51,
Could you go over the authors' responses, as well as the questions raised by the other reviewers?
Do the additional experiments the authors provided convince you? Do you have any further questions for the authors?
Thanks,
Your AC
---
Rebuttal Comment 1.2:
Title: Thank you for your response
Comment: I would like to thank the authors for responding to my concerns. Most of my questions and concerns have been answered, and therefore I would like to raise my score to 6. However, I do want to clarify that
1. While many prior works have used very few images to evaluate FID, FID is only consistent and stable when we have more than 10k samples [1].
2. By semantic segmentation map to image task, I meant the edits to the images can be derived from semantic segmentation maps.
It would be interesting to see the authors sample more images to evaluate FID and apply their method to the tasks I mentioned above in the future, but in my opinion this paper deserves an accept with the content provided so far in the paper, supplementary materials, and the rebuttal discussion.
[1] Mikołaj Bińkowski, Danica J. Sutherland, Michael Arbel, Arthur Gretton. Demystifying MMD GANs. ICLR 2018.
---
Reply to Comment 1.2.1:
Comment: Thank you for your clarification.
We will continue to look into your suggestions and report any interesting findings in the final version. | null | null | null | null | null | null |
Generative Category-level Object Pose Estimation via Diffusion Models | Accept (poster) | Summary: This paper proposes a novel approach for generative object pose estimation based on diffusion models.
Strengths: 1. Formulating pose estimation as a diffusion process is novel. It shows excellent ability to solve the multi-hypothesis issues in pose estimation caused by symmetry and partial observation. Moreover, it can be easily extended to solve the object pose tracking problem.
2. This paper proposes an efficient framework to estimate the object pose using diffusion models. It trains a score-based diffusion model to sample pose hypotheses, and an additional energy-based diffusion model to estimate the likelihood of each hypothesis.
3. The paper is well written with nice figures, clear equations and proficient writing style.
4. Experimental results on several benchmarks demonstrate the efficiency of the proposed method.
Weaknesses: In Table-1, this paper conducts comparison with category-level object pose estimation methods that predict 9 DoF object pose, consisting of 3D rotation, 3D translation and 3D size. However, the 3D size is not considered by the proposed method. Though the metrics are focused on rotation and translation, it still could cause a potential unfair comparison.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: As a likelihood is estimated for each pose hypothesis, maybe an alternative way to get the final estimation is fusing them with a weighting scheme. For example, could it be possible to design a weighted version of Eq-9?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitation of inference speed and future work is included in the last section of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **(6D -> 9D)Q1: In Table-1, this paper conducts comparison with category-level object pose estimation methods that predict 9 DoF object pose, consisting of 3D rotation, 3D translation and 3D size. However, the 3D size is not considered by the proposed method. Though the metrics are focused on rotation and translation, it still could cause a potential unfair comparison.**
**A1:** Thanks for pointing it out! To clarify, although our method is primarily geared towards 6D object pose estimation, it can directly produce a 9D object pose when provided with a point cloud and segmentation mask. We first map the object point cloud to canonical space using its estimated 6D pose. Then, we refine the canonical point cloud using a standard outlier removal algorithm. Finally, we determine the 3D scales by computing the axis-aligned bounding box of the refined point cloud. The 9D object pose computation process is deployed in the real-world experiments found on our project page and in our supplementary video. We understand the importance of estimating 3D scales and are open to offering a more exhaustive evaluation if needed.
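For illustration, the three-step scale-recovery procedure can be sketched in a few lines (the helper `scales_from_pose` and its simple distance-based outlier filter are our own simplification, not the exact algorithm used in the paper):

```python
import numpy as np

def scales_from_pose(points, R, t, k=2.0):
    """Recover 3D object scales from a segmented point cloud and a 6D pose.

    points : (N, 3) observed object points in the camera frame
    R, t   : estimated rotation (3x3) and translation (3,)
    k      : std-deviation threshold for the simple outlier filter
    """
    # 1. map points into the canonical object frame via the 6D pose
    canonical = (points - t) @ R          # R.T @ (p - t) applied row-wise
    # 2. simple statistical outlier removal on distance to the centroid
    d = np.linalg.norm(canonical - canonical.mean(axis=0), axis=1)
    inliers = canonical[d < d.mean() + k * d.std()]
    # 3. scales = extents of the axis-aligned bounding box
    return inliers.max(axis=0) - inliers.min(axis=0)

# toy example: a unit cube observed with a known pose, plus one gross outlier
rng = np.random.default_rng(1)
cube = rng.uniform(-0.5, 0.5, size=(500, 3))
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
observed = cube @ R.T + t
observed = np.vstack([observed, t + 10.0])   # segmentation error / outlier
scales = scales_from_pose(observed, R, t)
```

In this toy case the recovered scales stay near the cube's unit extents because the outlier is rejected before the bounding box is computed.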
> **(Mean Pooling v.s. Weighted Mean Pooling)Q2: As a likehood is estimated for each pose hypothesis, maybe an alternative way to get the final estimation is fusing them with a weighting scheme. For example, could it be possible to design a weighted version of Eq-9?**
**A2:** Thanks for your insightful suggestions! We concede that there might be other aggregation techniques superior to simple averaging. Nonetheless, the primary concern in this work is dealing with outliers. The presence of outliers invariably skews aggregated results, regardless of the chosen method, be it weighted mean pooling or plain mean pooling. While some prior works address the multi-hypothesis challenge, they overlook this foundational problem. To the best of our knowledge, this is the first work that leverages an energy-based diffusion model to remove outliers, which is also the key technical novelty of this work.
Nevertheless, we conducted experiments to compare mean-pooling with other aggregation methods (i.e., likelihood weighting). As shown in **Table 1** of the PDF, mean-pooling consistently outperforms weighted averaging. We hypothesize that the energy model is better at distinguishing poses with significant differences in errors but performs poorly when distinguishing poses with low errors. As a result, some poses with higher errors might overshadow those with lower errors in energy weighting. To validate this hypothesis, we further explored the relationship between the pose error and the energy output. Results in **Figure 1** of the PDF demonstrate that the energy model assigns similar (right, <2 cm), or even incorrect values (left, <10 degrees) for poses with low errors. Meanwhile, the energies of low-error poses (e.g., <10 degrees, <2 cm) are higher than those of high-error poses (e.g., >20 degrees, >4 cm).
---
Rebuttal Comment 1.1:
Title: Post-rebuttal comment
Comment: I appreciate the author's feedback, which addressed my concerns about how object size can be estimated on novel shapes, and how different aggregation scheme would perform with the proposed framework.
Overall, I agree with other reviewers that this paper has proposed an efficient generative object pose estimation method based on diffusion models with sufficient novelty to be accepted by NeurIPS.
---
Reply to Comment 1.1.1:
Title: Thank You!
Comment: We are so glad that our responses help address your concerns. Thanks again for all your valuable feedback! | Summary: This paper proposes a novel conditional generative approach for category-level pose estimation to solve the multi-hypothesis problem. The proposed method utilize the energy-based diffusion model to aggregate the candidates generated by the score-based diffusion model. Extensive experiments have been conducted to show the effectiveness of the proposed method on the REAL275 dataset under category-level pose estimation/tracking and novel category pose estimation evaluation protocols.
Strengths: 1. The author states the method clearly, and the notation is easy to follow.
2. The author proposed a novel multi-hypothesis method and shows promising results compared to regression and correspondence methods.
3. The ablation study is convincing.
Weaknesses: 1. I wonder if scoreNet and energyNet are needed for each class. Does a single model infer all classes?
2. I wonder about the inference time of object pose estimation. Does the inference time increase linearly with the number of hypotheses?
3. Is there anything to consider regarding symmetry properties when applying augmentation in the training process?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. If it is not a single model that predicts all classes, but one model per class, how did you compare the parameters of the models (Table 1)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Covered the limitation in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: I wonder if scoreNet and energyNet are needed for each class. Does a single model infer all classes?**
**A1:** Apologies for the confusion. To clarify, we require only a single score model and a single energy model, shared across all classes. Notably, neither model is conditioned on the class label or categorical priors, such as a mean point cloud. We will revise the paper to make this point clearer.
> **Q2: I wonder about the inference time of object pose estimation. Does the inference time increase linearly with the number of hypotheses?**
**A2:** Thank you for raising this point. We concur that inference time is a critical concern. Due to the time-consuming sampling process of our diffusion model, our model achieves only 3 FPS for 6D object pose estimation from single-frame images. However, to counter this challenge, we introduced a tracking algorithm. By leveraging well-initialized values from the previous frame, we significantly accelerate the sampling process, enabling the tracking frame rate to reach **18 FPS**. Furthermore, we are actively exploring avenues to expedite the sampling process [1][2], marking an area of our future work.
The inference time of our single-frame object pose estimation doesn't scale linearly with the number of pose hypotheses, because the hypotheses are sampled in parallel as a batch (on the GPU) using the reverse ODE process. Below, we list the inference times for varying numbers of pose hypotheses:
| Number of hypotheses | 1 | 10 | 20 | 30 | 40 | 50 |
|---|---|---|---|---|---|---|
| Single Frame Pose Estimation (s) | 0.231 | 0.285 | 0.292 | 0.301 | 0.316 | 0.328 |
| Pose Tracking (s) | 0.036 | 0.043 | 0.046 |0.049 | 0.052 | 0.054 |
[1] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. ICML 2023
[2] Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. ICLR 2022.
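The batch-parallel structure can be illustrated with a toy reverse-ODE loop (the linear `score` below is a hypothetical stand-in for the learned ScoreNet): every Euler step is one vectorized operation over all hypotheses, which is why wall-clock time grows only mildly with the batch size on a GPU.

```python
import numpy as np

def reverse_ode_batch(x0, score_fn, steps=200, dt=0.05):
    """Integrate the reverse ODE for all pose hypotheses at once.
    x0: (B, D) array, one row per hypothesis."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * score_fn(x)   # one vectorized Euler step per iteration
    return x

# toy score pulling every sample toward a single "pose" vector
target = np.array([1.0, -1.0, 0.5])
score = lambda x: target - x

rng = np.random.default_rng(3)
hypotheses = reverse_ode_batch(rng.standard_normal((50, 3)), score)
```

Running one hypothesis or fifty costs the same number of steps; only the per-step tensor is wider, which a GPU absorbs almost for free.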
> **Q3: Is there any symmetric augmentation in the training process?**
**A3:** To clarify, we didn't incorporate specific designs, like tailored loss functions or symmetry-focused data augmentation, for symmetrical objects during training. Even so, we noted significant performance improvements, particularly with symmetric objects (as detailed in Sec 4.4 of the main text). We credit this success to our conditional generation formulation, which we believe provides valuable insights for the wider research community.
> **Q4: If it is not a single model that predicts all classes, but one model per class, how did you compare the parameters of the models (Table 1)?**
**A4:** As mentioned in **Q1**, we only train a single model to infer all categories in our paper.
**We hope our rebuttal could address your concerns. Please let us know whether you have further questions. We are sincerely waiting for discussion with you.**
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thanks for your comments and most of my concerns are addressed. I have a follow-up question.
1. As I understand it, the proposed method works without conditioning on the class label or priors and is not designed for specific class properties (symmetries). It looks like it could work when trained on the bowl, bottle, and can classes, since their point clouds have similar geometric shapes. If we consider all training classes, including laptop, camera, and mug, I wonder whether the overall performance on bowl, bottle, and can is maintained or degraded.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. If we understood correctly, your major concern is the scalability of our approach, which is trained on all categories without being conditioned on canonical priors. To address it:
- Firstly, our model has already been trained on all six categories and achieved SOTA performance, as illustrated in Table 1 in the manuscript.
- Furthermore, as you mentioned, we conducted additional ablation studies on training categories. However, the training of *Ours w/o bottle, bowl, and can* could not converge within the discussion period due to limited time. Alternatively, we excluded one category from 'bottle,' 'bowl,' or 'can' during training and reported the average performance on the 'camera,' 'laptop,' and 'mug' categories.
The results below demonstrate that the addition of a new category during training (referred to as *Ours w all categories*) **does not have a significant impact on the average performance of the existing categories** (i.e., 'camera,' 'laptop,' and 'mug'). This suggests the potential of our method to scale up to larger datasets.
***
| Training categories | $5^{\circ}2$cm | $5^{\circ}5$cm | $10^{\circ}2$cm | $10^{\circ}5$cm |
|---|---|---|---|---|
| *Ours w/o bottle* | 31.27 | 42.73 | 55.80 | 71.73 |
| *Ours w/o bowl* | 30.23 | 41.37 | 54.07 | 69.33 |
| *Ours w/o can* | 32.23 | 43.00 | 55.23 | 70.07 |
| *Ours w all categories* | 31.60 | 42.40 | 55.77 | 71.13 |
***
We remain open to further discussions and inquiries on this concern and are sincerely waiting for your response! | Summary: This paper mines and formulates ambiguity in the task of object pose estimation, proposing to use a diffuse generative model to generate multiple hypotheses, which are then aggregated through additional scoring and ranking by another scorer net. The model achieves significant improvements with less supervision and parameters, has good cross-category generalization results, and performs well on tracking and stimulation tasks.
Strengths: - I like the observation of the symmetric ambiguity and generative multi-hypothesis formulation.
- The performance has been dramatically boosted, and the method can beat all SOTAs even with less supervision and fewer parameters.
- Thank the authors for testing the method on many datasets and tasks.
Weaknesses: - (Conditional generation) Adding objects as conditions should further improve in-distribution test performance, right?
- (Redundancy of model designs) EnergyNet can also be used for sampling, so why is ScoreNet needed?
- (Model design choices) How about using other models instead of EnergyNet for estimation? E.g., normalizing flows can give exact likelihoods. There has been much multi-hypothesis work in human pose estimation/motion generation [a-c]. Essentially, this framework does not explore or mine features specific to object pose estimation. In other words, this model could also be used in other fields, and models from other fields could also be used here. Regarding the problem setup, there seem to be few special aspects of monocular estimation beyond the mentioned symmetric ambiguity; however, the dramatic improvements from straightforwardly applying generative models encourage us to think about the underlying principles.
- (Mean pooling aggregation) Why must mean pooling be used for aggregation? It looks like the mode of the largest cluster corresponds to the most likely solution, which is probably better than the mean. Additionally, some other works also study empirical aggregations, e.g., clustering [a] and weighting [b, c]. Is it because quaternions are not easy to work with? What about converting to other forms (e.g., coordinates)? I expect to see comparisons or the authors' discussion on this.
- (Less yet better?) Can you elaborate on your speculation for the L265-267? Why does your method require less supervision information and a lighter model yet achieve better results? This guess will be very interesting.
- (Best hypothesis) I don't quite understand the last row of the table in Tab. 3; why is mean pooling needed if GT is accessible? What did I miss?
- (Cross-category generalization) I think the authors should emphasize the limitations when claiming generalization gains. I don't quite see why even a generative model would not also have unstable, poor, and unreliable performance on OOD point-cloud conditions that have not been seen in the training set. Also, the comparison with the baselines in Tab. 5 is unfair; it would be better to let them also train and test on the same split. I hope the authors could kindly elaborate on this.
- (Missing related works) Should add related work of multi-hypothesis generation [a-e].
- (Visualizations of multiple pose hypotheses) While there is a dimensionality reduction visualization in Fig. 3, it is suggested to add the predicted multiple pose hypotheses along with the estimated energies to illustrate the method's effectiveness qualitatively. I also hope that the author can provide visualizations of the results, showing the advantages and failures of your method compared to the baseline under different settings such as D and RGB-D (Sup. Fig. 2 is better to also show input depth and point cloud data). This helps readers better understand the proposed method.
- (Reproducibility) While this cannot be forced or required, if the author commits to publicizing the code, it will have a greater impact and be more helpful to the field.
References:
- [a] C. Li & G. Lee. Weakly supervised generative network for multiple 3D human pose hypotheses. BMVC’20.
- [b] B. Biggs et al. 3D multibodies: Fitting sets of plausible 3D models to ambiguous image data. NeurIPS’20.
- [c] W. Shan et al. Diffusion-based 3D human pose estimation with multi-hypothesis aggregation. CVPR’23.
- [d] G. Chliveros et al. Robust multi-hypothesis 3D object pose tracking. ICVS’13.
- [e] F. Michel et al. Global hypothesis generation for 6D object pose estimation. CVPR’17.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Yeah.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: why is mean pooling needed since GT is accessible? What did I miss?**
**A1:** To clarify, the ground truth (GT) is not accessible during test time. That's why we employ another energy-based diffusion model to aggregate these candidates into a final output in the absence of the GT.
In our experiments, we emphasize the performance without the GT label in Table 1 of the main text. Additionally, we assess the upper limit of our approach by evaluating the best candidate (i.e., the one closest to the GT), in line with what other multi-hypothesis studies do [1].
> **Q2: EnergyNet can also be used for sampling, so why is ScoreNet needed?**
**A2:** Thanks for the good question. There are two main reasons:
- The score-based model significantly outperforms the energy-based model in candidate generation. We conducted an ablation study to compare the performance of using the score and energy models for candidate generation, respectively. As shown in **Table 2** of the PDF, 'score + energy' markedly outperforms 'energy + energy' across all metrics. This might be due to the limited capabilities of using the 2nd-order derivatives of an energy model to parameterize the score function.
- Furthermore, using the energy-based model to sample candidates requires calculating the second-order derivatives, which is time-inefficient.
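To make the cost difference concrete, here is a minimal 1-D toy (our own construction, not the paper's models): under the common convention p(x) ~ exp(-E(x)), sampling from an energy model requires differentiating the energy at every step to obtain the score, so an energy-based sampler pays for an extra level of differentiation. We use E(x) = (x - mu)^2 / 2, whose exact score is mu - x, and recover it by finite differences.

```python
import numpy as np

# Toy energy E(x) = (x - mu)^2 / 2; the induced density is N(mu, 1)
rng = np.random.default_rng(0)
mu = 2.0

def energy(x):
    return 0.5 * (x - mu) ** 2

def score_from_energy(x, h=1e-4):
    # score = -dE/dx, via central finite differences (one extra level of
    # differentiation compared to a network that outputs the score directly)
    return -(energy(x + h) - energy(x - h)) / (2.0 * h)

# Langevin sampling with the energy-derived score; the stationary
# distribution is N(mu, 1), so the chains concentrate around mu
x = np.zeros(2000)
eps = 0.1
for _ in range(500):
    x = x + eps * score_from_energy(x) + np.sqrt(2.0 * eps) * rng.normal(size=x.shape)
```

With a learned energy network, `score_from_energy` would be an autograd pass through the network, and training that derived score would then back-propagate through the gradient, i.e., require second-order derivatives.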
> **Q3: Why must mean pooling be used for aggregation?**
**A3:** Thank you for your valuable suggestion. Due to the character limit, please refer to **Q3** in the Common Responses.
> **Q4: Why does your method require less supervision information and a lighter model yet achieve better results?**
**A4:** Thank you for highlighting this interesting point. We hypothesize that the performance gain arises from the enlarged computational graph induced by the denoising process. During inference, the pose candidates are generated from a denoising process that involves hundreds of inferences from the score network. Although the score network has only **2.2M** parameters, the expanded computational graph due to the denoising process can grow to encompass hundreds of millions of parameters.
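This effect can be illustrated with a toy variance-exploding reverse-diffusion sampler (our own 1-D construction with an analytic score standing in for the 2.2M-parameter ScoreNet): producing one sample takes T sequential score evaluations, so the unrolled computational graph is roughly T times the score model's size.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_min, sigma_max, T = 0.01, 10.0, 500
n_calls = 0

def score(x, sigma):
    # exact score of data ~ N(0, 1) perturbed with noise level sigma
    global n_calls
    n_calls += 1
    return -x / (1.0 + sigma ** 2)

n = 5000
x = rng.normal(0.0, np.sqrt(1.0 + sigma_max ** 2), n)  # sample the prior
dt = 1.0 / T
log_ratio = np.log(sigma_max / sigma_min)
for i in range(T, 0, -1):
    sigma = sigma_min * (sigma_max / sigma_min) ** (i * dt)
    g2 = 2.0 * sigma ** 2 * log_ratio   # g(t)^2 = 2 * sigma_dot(t) * sigma(t)
    x = x + g2 * score(x, sigma) * dt + np.sqrt(g2 * dt) * rng.normal(size=n)
```

After the loop, `n_calls` equals T, and the samples approximately follow the clean data distribution N(0, 1): the score function is cheap per call, yet each sample is the product of T chained evaluations.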
> **Q5: This framework does not explore or mine features for object pose estimation. What is the underlying principles of the dramatic improvements?**
**A5:** Thank you for bringing this up, and sorry for any confusion regarding our unique design features.
- (Special Design) Due to the character limit, please refer to **Q1** in Common Response.
- (Underlying Principles) Moreover, we concur that understanding the principles behind the significant improvements is valuable. We hypothesize that these improvements might stem from the substantially expanded computational graph of the denoising process and the energy model's capacity to eliminate outliers. We delve into these hypotheses in **Q3** and **Q4**, respectively, and will incorporate these analyses in our subsequent revision.
> **Q6: The authors should emphasize the limitations when claiming generalization gains. Comparing the baselines in Tab. 5 is unfair**
**A6:** We apologize for the oversight in clarifying the limitations. The limitations in cross-category generalization stem from conditional generation, as statistical methods can't guarantee out-of-distribution (OOD) generalization. For instance, if trained on REAL275 without cameras, our method may not perform well on camera objects due to their distinct geometry. We'll update Sections 4.4 and 5 to highlight these limitations more clearly.
We are still retraining all the baselines using the same split as ours (see Table 5 of the PDF). We will update the final results as soon as possible during the discussion period.
> **Q7: Adding objects as conditions should further improve in-distribution test performance, right?**
**A7:** We agree that adding such priors (e.g., a 'mean point cloud') as input conditions might further improve in-distribution test performance, which has also been observed by previous canonical-prior-based methods.
Nonetheless, such a design choice would also hurt OOD performance when encountering objects from novel categories, due to the large shape variations between seen and unseen categories.
> **Q8: How about using other models instead of EnergyNet to estimate likelihoods, e.g., normalizing flows?**
**A8:** Thanks for the good question. We agree that flow-based models could be an alternative approach for exact likelihood estimation. Nevertheless, normalizing flows must use specialized architectures to build a normalized probability model, which limits their capacity. Besides, estimating the exact likelihood with a normalizing flow requires computing the determinant of the Jacobian of the z-to-x mapping, which is time-consuming. On the other hand, diffusion-based methods have achieved SOTA performance on NLL tests [2].
In light of this, we choose to use EnergyNet to estimate the likelihood. We will include this rationale for the choice of likelihood estimator in the revision.
> **Q9: Should add related work of multi-hypothesis generation [a-e].**
**A9:** Thanks for your advice! We will include these related works in the future revision.
> **Q10: Add the predicted multiple pose hypotheses along with the estimated energies. Showing the advantages and failures of your method**
**A10:** Thanks for the good suggestion! We promise to revise Fig. 2 and Fig. 3 accordingly and provide qualitative results of advantages/failure cases in the future revision. However, due to the page limit of the rebuttal PDF, we do not demonstrate them at this stage.
> **Q11: Releasing the codes will have a greater impact and be more helpful to the field.**
**A11:** We promise to release the code upon acceptance. We are also open to deploying our model on HuggingFace to validate its effectiveness, if required.
[1] Shan et al. Diffusion-Based 3D Human Pose Estimation with Multi-Hypothesis Aggregation.
[2] Song et al. Score-based generative modeling through stochastic differential equations.
---
Rebuttal 2:
Title: Updating the Cross-category Experiments' Result
Comment: We apologize for the late update. We have finished the training and compare, in the table below, the performance of our method and the baselines on cross-category generalization, using the same training and testing split. The results show that our method still significantly outperforms the baselines in the OOD test.
We hope this could address your concerns and **are sincerely waiting for your response.** If you have any further questions, please let us know and we will spare no effort to provide in-time responses.
***
| Category | Method | $5^{\circ}2$cm | $5^{\circ}5$cm | $10^{\circ}2$cm | $10^{\circ}5$cm |
|---|---|---|---|---|---|
| bowl | SAR-Net[1] | 58.1/36.4 | 66.0/47.3 | 83.7/59.4 | 93.6/81.5 |
| bowl | RBP-Pose[2] | 75.4/0.0 | 81.7/6.9 | 92.1/0.1 | 100.0/30.7 |
| bowl | Ours | **85.7**/**64.5** | **92.6**/**72.5** | **93.1**/**87.2** | **100.0**/**98.6** |
| bottle | SAR-Net[1] | 43.5/11.7 | 54.0/23.0 | 61.3/33.6 | 79.8/68.0 |
| bottle | RBP-Pose[2] | 38.7/4.3 | 43.5/5.8 | 76.4/24.7 | 89.8/29.7 |
| bottle | Ours | **53.6**/**39.0** | **62.0**/**53.2** | **81.4**/**73.6** | **92.7**/**94.6** |
| can | SAR-Net[1] | 32.2/7.3 | 62.2/52.3 | 52.5/12.1 | 92.9/87.9 |
| can | RBP-Pose[2] | 53.5/0.8 | 67.1/21.0 | 78.8/2.6 | 96.3/61.7 |
| can | Ours | **73.2**/**62.5** | **81.2**/**74.0** | **88.8**/**81.6** | **99.8**/**99.7** |
**Caption:** On the left side of the '/' are the results when all categories were included in the training, while on the right side of the '/' are the results when testing categories were excluded from training.
***
Notably, as [1] and [2] require category priors, we provide them with the prior from the nearest related category: when tested on 'bottle,' 'bowl,' and 'can,' [1] and [2] are provided with 'can,' 'bottle,' and 'bottle' priors, respectively.
[1] Lin H, et al. Sar-net: Shape alignment and recovery network for category-level 6d object pose and size estimation. CVPR, 2022.
[2] Zhang R, et al. RBP-Pose: Residual bounding box projection for category-level pose estimation. ECCV, 2022.
---
Rebuttal Comment 2.1:
Comment: I sincerely thank the authors for their meticulous response to my concerns, which resolved many of them. I am inclined to raise my rating. In addition, if we can discuss the remaining concerns, I think it will further improve this work.
(**Improvement understanding**) Personally, I am concerned about the explanation of diffusion models' extended computation graph. It would be even better if the author could provide some references to support it. I think the main takeaway from this work is the formulation of generative tasks, which allow generative models to model ambiguity well. Better generative models may lead to better performance. But as more conditions are added to reduce ambiguity, the advantage of generative models seems to diminish. I am looking forward to the authors' comments on my opinions.
(**Different choices of EnergyNet**) I still think it would make this work more complete if the authors could consider comparing performance differences between the current diffusion EnergyNet and commonly used normalized flows in a future version (not necessarily at this discussion stage).
(**Heuristic mean pooling**) I also want to comment on heuristic weighting. Thanks to the authors for their in-depth investigation. The results show that EnergyNet is not well calibrated on rotation (i.e., lower errors should have higher energy), which explains why seemingly more reasonable designs, such as pooling weighted by energy, do not perform better. This kind of trustworthiness-related problem and phenomenon deserves attention and in-depth study.
(**Better performance with less supervision**) Since the authors also said in R7 that adding category priors can improve in-distribution test performance, could they further explain the better performance with less supervision (depth or category prior) compared to existing work (L265)? Is it because EnergyNet helps remove outliers and thus makes the prediction more accurate? If so, does that mean the benefits from removing outliers outweigh those from more supervision?
(**Good OOD generalization understanding**) I still don't understand its rationale, such as why the model is not trained on the bowl category but can generalize well to the OOD bowl category. The in-distribution generalization MLE objective for generative model training does not account for the good generalization achieved.
(**ScoreNet vs. EnergyNet**) If I understand the point in the paper [35] correctly, they say that it doesn't matter if the EBM is not normalized, but factors like architecture do. So that's why I don't understand your experimental results on the different generative performances of ScoreNet and EnergyNet with the same architecture in your case. Can you elaborate?
---
Reply to Comment 2.1.1:
Comment: Thank you for providing us with your valuable and well-structured feedback, and for also improving the rating! We sincerely appreciate the in-depth and insightful discussion we've had with you and the other reviewers. Furthermore, we assure you that in future revisions, we will make an effort to incorporate additional results into the main text or appendix during the discussion period. **We firmly believe that your valuable suggestions and questions contribute significantly to the improvement of this work,** and we are more than willing to address your follow-up questions:
***
> **Q1: (Improvement understanding)**:
**A1:** Our explanation of the extended computational graph can be found in [1], specifically in the second paragraph of the Introduction. The first author of [1] is the pioneer of the deep score-based diffusion model [2, 3]. Furthermore, we have performed additional experiments to bolster this explanation. These results are presented in Table 4 of the PDF, where performance consistently improves with an increasing number of sampling steps.
Additionally, we believe that the ambiguity arising from partial observations would persist, no matter how many conditions are added. For instance, if the handle of a cup remains invisible within the current view, the introduction of RGB images or canonical priors would not eliminate the pose ambiguity of the cup. Therefore, we are of the opinion that the inclusion of further conditions would not diminish the advantages of the generative models.
[1] Consistency Models, ICML 2023
[2] Generative Modeling by Estimating Gradients of the Data Distribution, NeurIPS 2020
[3] Score-based Generative Modeling Through Stochastic Differential Equations, ICLR 2021
> **Q2: (Better performance with less supervision)**:
**A2:** Thanks for your insightful questions! We believe that our approach achieves improved performance with reduced supervision, not primarily due to energy-based outlier removal, but rather because of the effectiveness of our generative formulation in modeling ambiguities and the decision to utilize diffusion models. The results presented in Table 3 of our manuscript, using the *random ranker*, demonstrate that while energy-based outlier removal contributes to the enhanced performance of our method, it is not the primary factor driving the significant performance improvements we have achieved. Nonetheless, we recognize that leveraging additional sources of supervision, such as the RGB images or canonical priors, constitutes a promising future direction that aligns well with our current approach.
> **Q3: (Good OOD generalization understanding)**:
**A3:** We appreciate you bringing this to our attention. We hypothesize that the specific out-of-distribution (OOD) generalization ability of our method arises from the learned feature space of the point cloud. Although the bowl category is considered OOD, the extracted features from point clouds in the bowl category may exhibit similarities or closeness to seen categories, to some extent. In other words, a bowl may share visual characteristics with items such as cans, bottles, or mugs, as identified by the PointNet of the ScoreNet.
**To validate this hypothesis, we conducted a t-SNE[4] analysis on the point cloud feature space of the ScoreNet.** Specifically, we employed the PointNet of the ScoreNet trained on five categories excluding bowls as a feature extractor. Subsequently, we extracted features from the point clouds of objects in the test set and visualized the t-SNE results. The outcomes of the t-SNE analysis (w/o bowl), along with representative CAD models of bowls, cans, and bottles, are depicted on the top of our anonymous website (accessible via the link provided in the abstract) within the manuscript. Additionally, we also include the same t-SNE test on w/o can and w/o bottle.
The results demonstrate that features from cans and bottles tend to intermingle, aligning with the accurate observation that both cans and bottles exhibit symmetrical cylindrical shapes. Meanwhile, features from the bowl category show proximity to features from mugs. This phenomenon indicates a degree of similarity between bowls and other training categories within the feature space.
[4] "Visualizing Data using t-SNE," JMLR 2008"
Title: Reply to the follow-up concerns, Part [1/2] | Summary: To settle the multi-hypothesis issue in category-level 6D pose estimation, this paper formulates the task as conditional generative modeling and proposes a novel method based on diffusion models, which uses a score-based diffusion model to sample pose candidates, followed by an energy-based one that ranks those candidates. The proposed method achieves state-of-the-art results on the REAL275 dataset for category-level pose estimation, and is further extended to the task of pose tracking.
Strengths: - The authors propose a new perspective for category-level 6D pose estimation by formulating the task as conditional generative modeling and realizing it via diffusion models.
- The proposed method achieves the state-of-the-art results on REAL275 dataset for both tasks of category-level pose estimation and pose tracking. Cross-category experiments and real-world applications are also conducted to verify its generalization ability.
- The paper is well written and presented with detailed illustrations and thorough experiments.
Weaknesses: - The proposed method is not competitive in terms of inference time with the use of generative models.
- The original definition of category-level task includes the estimation of object sizes, which could not be learned in the proposed method.
- Some methods are not compared with in Table 1 for pose estimation, e.g., [1][2][3], and Table 6 for pose tracking, e.g., [1][4][5].
Reference:
[1] Sparse Steerable Convolutions: An Efficient Learning of SE(3)-equivariant Features for Estimation and Tracking of Object Poses in 3d Space.
[2] CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation.
[3] ShAPO: Implicit Representations for Multi-Object Shape, Appearance and Pose Optimization.
[4] ICK-Track: A Category-Level 6-DoF Pose Tracker Using Inter-Frame Consistent Keypoints for Aerial Manipulation.
[5] BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - What’s the special designs of the proposed method for category-level pose estimation, compared to instance-level pose estimation? Could it be applied to the instance-level task?
- Some typos.
- ’w‘ in Eq. (1) and $\dot{\sigma}$ in Eq. (4) are not explained.
- In Line 167, should '$\log p_t(p|O)$' be '$\nabla_p \log p_t(p|O)$'?
- In Line213, "symmetric objects ??" -> "symmetric objects".
- In Lines 278, 281, 284, should all the 'M' be 'K'?
- In References, paper [12] and paper [13] refer to the same papers.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have discussed the limitations of their work in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: The proposed method is not competitive in terms of inference time.**
**A1:** Thanks for pointing it out! We acknowledge that the sampling process of the diffusion model introduces a notable computational overhead; consequently, our current approach for estimating the 6D object pose from individual images does lack competitive inference speed. To address this concern, we have demonstrated within the main body of our paper the adaptation of our method into a final framework for 6D object pose estimation, achieving a swift execution rate (18 FPS) while maintaining high performance. Additionally, we are actively exploring a promising avenue for further research, namely expediting the sampling process [1][2]; this constitutes our future work.
> **Q2: Object sizes could not be learned in the proposed method.**
**A2:** Thanks for bringing it up! To clarify, although our method is primarily geared towards 6D object pose estimation, it can directly produce a 9D object pose when provided with a point cloud and segmentation mask. We first map the object point cloud to canonical space using its estimated 6D pose. Then, we refine the canonical point cloud using a standard outlier removal algorithm. Finally, we determine the 3D scales by computing the axis-aligned bounding box of the refined point cloud. The 9D object pose computation process is deployed in the real-world experiments found on our project page and in our supplementary video. We understand the importance of estimating 3D scales and are open to offering a more exhaustive evaluation if needed.
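The scale-recovery steps above can be sketched in numpy (function and variable names are ours, and the simple statistical filter stands in for whatever outlier-removal algorithm is actually used): map the observed cloud back to canonical space with the estimated 6D pose, drop outliers, and take the axis-aligned bounding-box extents as the 3D scale.

```python
import numpy as np

def estimate_scale(points, R, t, k=2.0):
    canonical = (points - t) @ R                     # invert p_obs = R @ p_can + t
    d = np.linalg.norm(canonical - canonical.mean(axis=0), axis=1)
    keep = d < d.mean() + k * d.std()                # statistical outlier removal
    clean = canonical[keep]
    return clean.max(axis=0) - clean.min(axis=0)     # axis-aligned bbox extents

# Synthetic check: a box of known scale under a known pose, plus outliers
rng = np.random.default_rng(0)
scale = np.array([0.2, 0.4, 0.6])
box = rng.uniform(-0.5, 0.5, (2000, 3)) * scale
stray = rng.normal(5.0, 0.1, (10, 3))               # e.g., segmentation outliers
c, s = np.cos(0.5), np.sin(0.5)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t = np.array([0.3, -0.2, 0.5])
observed = np.concatenate([box, stray]) @ R.T + t
est = estimate_scale(observed, R, t)                 # recovers roughly [0.2, 0.4, 0.6]
```

The stray points sit far from the canonical centroid, so the distance filter discards them before the bounding box is measured.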
> **Q3: Some methods are not compared with in Table 1 for pose estimation, e.g., [1][2][3], and Table 6 for pose tracking, e.g., [1][4][5].**
**A3:** We sincerely appreciate your valuable feedback, which has shed light on additional baseline methods. According to your suggestion, we have meticulously presented a comparative analysis between our proposed approach and the methods referenced in your comments.
Regarding the primary task of category-level 6D object pose estimation, our method maintains a substantial lead in performance:
| Method | $5^{\circ}2$cm$\uparrow$ | $5^{\circ}5$cm$\uparrow$ | $10^{\circ}2$cm$\uparrow$ | $10^{\circ}5$cm$\uparrow$ |
|---|---|---|---|---|
| [3] | 36.6 | 43.4 | 52.6 | 63.5 |
| CenterSnap[4] | - | 29.1 | - | 64.3 |
| ShAPO[5] | - | 48.8 | - | 66.8 |
| Ours | **52.4** | **61.2** | **72.8**| **84.2** |
For object tracking task, despite being directly transferred from a single-image prediction method, our approach has achieved comparable performance with the baselines:
|Method| $5^{\circ}5$cm$\uparrow$ | $r_{error}\downarrow$ | $t_{error}\downarrow$ |
|---|---|---|---|
| [3] | 54.5 | 5.2 | 1.9 |
| ICK-Track[6] | **84.4** | 4.5 | 3.1 |
| BundleTrack[7] w/o Pose Graph | 39.9 | 9.2 | 2.4 |
| Ours | 71.5 | **4.2** | **1.5** |
| BundleTrack[7] | 87.4 | 2.4 | 2.1 |
Notably, BundleTrack achieves enhanced performance by employing multi-frame images for global optimization; this optimization procedure is generic rather than specific to the pose estimator. Under a fair comparison using only a pair of image frames (i.e., without the pose graph), our approach outperforms BundleTrack.
We intend to include these particular baseline comparisons, as well as the references, in the revised version of our manuscript.
> **Q4: What’s the special designs of the proposed method for category-level pose estimation, compared to instance-level pose estimation? Could it be applied to the instance-level task?**
**A4:** Thank you for addressing this. For details on the special design, please refer to Q1 in the Common Responses. Our framework is specifically designed to address the multi-hypothesis challenge in category-level pose estimation. However, it can also be adapted for instance-level tasks by conditioning on the target object's CAD model. It's worth noting that this adapted method might not offer significant benefits over other instance-level methods.
### Thank you for the detailed comments regarding typos. We will revise the paper accordingly. Below are the detailed explanations:
> **T1: 'w' in Eq. (1) and \sigma in Eq. (4) are not explained.**
**A1:** The $dw$ is the standard Wiener process [3] (a.k.a. Brownian motion). The $\dot{\sigma}(t)$ is the derivative of $\sigma(t)$: $\dot{\sigma}(t) = \sigma_{min} \cdot (\ln \sigma_{max} -\ln \sigma_{min}) (\frac{\sigma_{max}}{\sigma_{min}})^t$
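For reference, the stated derivative follows directly from the geometric noise schedule $\sigma(t)=\sigma_{min}(\sigma_{max}/\sigma_{min})^{t}$ (a standard derivation, written out here for completeness):

```latex
\sigma(t) = \sigma_{min}\left(\frac{\sigma_{max}}{\sigma_{min}}\right)^{t}
          = \sigma_{min}\, e^{\,t\,(\ln\sigma_{max}-\ln\sigma_{min})}
\quad\Longrightarrow\quad
\dot{\sigma}(t) = \sigma_{min}\,(\ln\sigma_{max}-\ln\sigma_{min})\left(\frac{\sigma_{max}}{\sigma_{min}}\right)^{t}
```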
> **T2: In Line 167, should '$\log p_t(p|O)$' be '$\nabla_p \log p_t(p|O)$'?**
**A2:** Yes, it should be $\nabla_{p} \log p_t(p|O)$.
> **T3: In Line278,281,284, should all the 'M' be K"?**
**A3:** Yes! It should be $K$.
[1] Song Y, et al. Consistency models. ICML 2023.
[2] Salimans T, et al. Progressive distillation for fast sampling of diffusion models. ICLR 2022.
[3] Lin J, et al. Sparse Steerable Convolutions: An Efficient Learning of SE(3)-equivariant Features for Estimation and Tracking of Object Poses in 3d Space. Advances in Neural Information Processing Systems, 2021.
[4] Irshad M Z, et al. CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation. ICRA 2022.
[5] Irshad M Z, et al. ShAPO: Implicit Representations for Multi-Object Shape, Appearance and Pose Optimization. ECCV, 2022.
[6] Sun J, et al. ICK-Track: A Category-Level 6-DoF Pose Tracker Using Inter-Frame Consistent Keypoints for Aerial Manipulation. IROS, 2022.
[7] Wen B, et al. BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models. IROS, 2021. | Rebuttal 1:
Rebuttal: We are grateful to all reviewers for recognizing the merit of our idea, experiments, and presentation:
- "I appreciate the observation of symmetric ambiguity and the generative multi-hypothesis formulation." (W8Ay)
- "The improvement has seen a significant boost." (W8Ay)
- "This surpasses the performance of prior approaches by a considerable margin." (7PCs)
- "We've also conducted cross-category experiments and real-world applications to validate its generalizability." (4bwh)
- "The paper is meticulously written with comprehensive illustrations and thorough experiments." (4bwh)
**(Q1: Special Design)** However, we notice that some reviewers may be confused about our special designs for object pose estimation (**4bwh**, **W8Ay**). In this work, we address the multi-hypothesis issue, which is one of the **key features** of monocular object pose estimation. Moreover, to eliminate outliers resulting from conditional generation, our **key design** feature is the use of an energy-based diffusion model to filter them out. To highlight this primary objective, our networks consist solely of commonly used architectures; this notably paves the way for future explorations of specialized architectural designs within our framework. We will clarify this further in our revised introduction.
| | Previous works | This work |
| --- | --- | ---|
| Training Paradigm | Regression-based | Conditional Generative Modeling|
| Key Challenge | Handling Intra-class Variations | Removing Outliers |
**(Q2: 3D Scale Estimation)** We also observed feedback from reviewers noting our omission of 3D scale estimation in the primary text (**7PCs**, **4bwh**, **sAr7**). To clarify, although our method is primarily geared towards 6D object pose estimation, it can directly produce a 9D object pose when provided with a point cloud and segmentation mask. We first map the object point cloud to canonical space using its estimated 6D pose. Then, we refine the canonical point cloud using a standard outlier removal algorithm. Finally, we determine the 3D scales by computing the axis-aligned bounding box of the refined point cloud. The 9D object pose computation process is deployed in the real-world experiments found on our project page and in our supplementary video. We understand the importance of estimating 3D scales and are open to offering a more exhaustive evaluation if needed.
**(Q3: Alternative Aggregation Methods)** Additionally, some reviewers have suggested alternative aggregation methods like weighted mean pooling (**7PCs**, **W8Ay**, **sAr7**). We concede that there might be aggregation techniques superior to simple averaging. Nonetheless, the primary concern in this work is dealing with outliers: their presence invariably skews aggregated results, regardless of the chosen method, be it K-means or mean pooling. While some research addresses the multi-hypothesis challenge, it overlooks this foundational problem. To the best of our knowledge, this is the first work that leverages an energy-based diffusion model to remove outliers, which is also the key technical novelty of this work.
To address the reviewers' concerns, we conducted experiments comparing mean pooling with other aggregation methods (i.e., likelihood weighting). As shown in **Table 1** of the PDF, mean pooling consistently outperforms weighted averaging. We hypothesize that the energy model is good at distinguishing poses with significant differences in error but performs poorly at distinguishing among low-error poses. As a result, some poses with higher errors might overshadow those with lower errors in energy weighting. To validate this hypothesis, we further explored the relationship between pose error and energy output. The results in **Figure 1** of the PDF demonstrate that the energy model assigns similar (right, <2 cm) or even incorrectly ordered values (left, <10 degrees) to poses with low errors, while the energies of low-error poses (e.g., <10 degrees, <2 cm) are higher than those of high-error poses (e.g., >20 degrees, >4 cm).
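A 1-D toy with synthetic numbers (not the paper's data) illustrates why filtering before mean pooling matters: a few high-error candidates skew a plain mean, while ranking candidates first and mean-pooling only the top-ranked ones recovers the true pose. The ranking signal here is an idealized noisy error proxy standing in for the learned energy model.

```python
import numpy as np

rng = np.random.default_rng(0)
true_pose = 1.0
inliers = rng.normal(true_pose, 0.05, 40)
outliers = rng.normal(3.0, 0.05, 10)       # e.g., a wrong symmetry mode
candidates = np.concatenate([inliers, outliers])

# Idealized ranking signal: a noisy proxy for the candidate's error
proxy = np.abs(candidates - true_pose) + rng.normal(0.0, 0.02, candidates.size)

plain_mean = candidates.mean()                             # skewed toward outliers
filtered_mean = candidates[np.argsort(proxy)[:25]].mean()  # rank, keep 25, pool
```

Because the outliers form a compact cluster far from the true pose, the plain mean lands between the two clusters, while the rank-then-pool estimate stays near the true pose.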
We sincerely hope our work contributes to the Machine Learning + 3D Vision research community. Below we reply to reviewers’ questions point-by-point. Thanks again for your valuable comments and suggestions!
Pdf: /pdf/9415bfdf7413af202ed6fea30aff51cbdde50ead.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes a diffusion-based model for category-level object pose estimation. Unlike existing deterministic approaches, which treat object pose estimation as a regression problem, the proposed diffusion model formulates it as a generation problem. In this way, it can tackle the multiple-pose-hypotheses issue that arises from symmetric objects and partial observations. The proposed approach achieves state-of-the-art performance on existing benchmarks and can generalize to unseen symmetric categories.
Strengths: a. The paper introduces a novel diffusion-based framework for category-level pose estimation. The framework consists of two diffusion models. The first diffusion model generates a set of pose candidates for a given point cloud during the inference stage. The second diffusion model uses an energy-based approach to rank the candidates and filter out those with low rankings. This paper claims to be the first to propose a solution for pose estimation using diffusion models.
b. The diffusion-based model presented in the paper demonstrates state-of-the-art performance on existing benchmarks. It surpasses the performance of previous approaches by a significant margin. Additionally, the authors argue that their framework has the potential to generalize to objects of unseen categories that have not been studied by other approaches.
Weaknesses: a. The chosen representation of the pose parameter in this paper consists of a 9-dimensional vector, where 6 dimensions are allocated for rotation and 3 dimensions for translation. Notably, the omission of the scale parameter from this representation raises an important inquiry regarding the reasons behind its exclusion and the means by which it can be obtained during inference. Further clarification is required to address these concerns.
b. The current approach employed by the authors involves utilizing mean pooling to derive the final pose estimation from the filtered pose candidates. Given that the energy-based diffusion model is capable of estimating the data likelihood for each pose candidate, an alternative consideration arises: would adopting weighted averaging be a more suitable approach for calculating the final pose estimation? It would be insightful to explore the potential benefits of incorporating weighted averaging as a means to enhance the accuracy of the pose estimation.
c. Additional information clarifying the specifics of the pose tracking implementation would be appreciated. Given that the framework incorporates two diffusion models, I am curious how the pose tracking component maintains such computational efficiency and delivers results in real time.
d. Some typos. (Line 213)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Some questions are listed in the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: inquiry regarding the reasons about exclusion of scale parameters and the means by which it can be obtained during inference.**
**A1:** Thanks for pointing this out! We would like to clarify that our work primarily focuses on estimating a 6D object pose. Nonetheless, our method can derive a 9D object pose given the point cloud and the segmentation mask. This process involves several key steps:
- Transforming the object's point cloud into the canonical space, utilizing estimated 6D object pose.
- Denoising the canonical point cloud via an off-the-shelf outlier removal algorithm.
- Calculating the axis-aligned bounding box of the denoised point cloud to get the object size.
In our real-world experiments (shown on the project page and in the supplementary video), the 9D object pose is calculated via the aforementioned method and used for robotic manipulation. We acknowledge that estimating 3D sizes is also important and are willing to provide more comprehensive evaluations if required.
> **Q2: would adopting weighted averaging be a more suitable approach for aggregation?**
**A2:** Thanks for your insightful suggestion! We concede that other aggregation techniques may be superior to simple averaging. Nonetheless, the primary concern of this work is dealing with outliers. The presence of outliers invariably skews aggregated results regardless of the chosen method, be it K-means or mean pooling. While some research addresses the multi-hypothesis challenge, it overlooks this foundational problem. To the best of our knowledge, this is the first work that leverages an energy-based diffusion model to remove outliers, which is also the key technical novelty of this work.
We conducted experiments to compare mean-pooling with other aggregation methods (i.e., likelihood weighting). As shown in **Table 1** of the PDF, mean-pooling consistently outperforms weighted averaging. We hypothesize that the energy model is better at distinguishing poses with significant differences in errors but performs poorly when distinguishing poses with low errors. As a result, some poses with higher errors might overshadow those with lower errors in energy weighting. To validate this hypothesis, we further explored the relationship between the pose error and the energy output. Results in **Figure 1** of the PDF demonstrate that the energy model assigns similar (right, <2 cm), or even incorrect values (left, <10 degrees) for poses with low errors. Meanwhile, the energies of low-error poses (e.g., <10 degrees, <2 cm) are higher than those of high-error poses (e.g., >20 degrees, >4 cm).
> **Q3: clarifying the specifics of the pose tracking implementation.**
**A3:**
We apologize for any confusion caused. We have summarized our tracking algorithm in **Algorithm 1** of the PDF for your reference. Our algorithm is adept at maintaining both computational efficiency and high performance. This is primarily attributed to the benefit of warm-starting from previous predictions. Since the difference in object positions between adjacent frames is typically minimal, the ODE process requires fewer time steps to arrive at a reliable estimation from the previous one, thereby boosting efficiency.
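The warm-starting idea can be made concrete with a schematic toy loop (entirely our illustration, not the authors' implementation: the one-line `denoise_step` contraction stands in for a reverse-ODE step of the pose diffusion model, and the step counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(est, target):
    """Stand-in for one reverse-ODE step: contract the estimate
    toward the pose that best explains the current observation."""
    return est + 0.3 * (target - est)

def track(pose_sequence, full_steps=50, warm_steps=5):
    """Schematic tracking loop with warm starts."""
    est = None
    for target in pose_sequence:
        if est is None:
            # First frame: run the full reverse ODE from pure noise.
            est, steps = rng.normal(size=3), full_steps
        else:
            # Later frames: warm-start from the previous estimate;
            # adjacent frames differ little, so few steps suffice.
            est, steps = est + 0.01 * rng.normal(size=3), warm_steps
        for _ in range(steps):
            est = denoise_step(est, target)
        yield est
```

The efficiency gain in this sketch comes purely from `warm_steps` being an order of magnitude smaller than `full_steps`, mirroring the rebuttal's argument.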
> **Q4: Some typos. (Line 213)**
**A4:** Thanks for pointing out the typos, and we will address all the typos in the revised version.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thanks for your feedback, which addressed most of my concerns. After reviewing the comments from other reviewers, I think the diffusion-based method this paper proposes for 6D pose estimation is quite novel and effective. I will raise my original rating.
---
Reply to Comment 1.1.1:
Title: Thank You!
Comment: Thanks for raising your rating to 7. We are so glad that our responses help address your concerns. Thanks again for all your valuable feedback! | null | null | null | null | null | null |
Practical Contextual Bandits with Feedback Graphs | Accept (poster) | Summary: This paper studies online learning in a contextual setting when a feedback graph determines the feedback received by the learner. Namely, the learner observes all the losses experienced by the actions in the graph-neighborhood of the action it played.
Feedback graphs are a well-known feedback model for online learning, and their study in the contextual setting is natural. The paper's main result is a general framework (Theorem 3.1) that reduces the problem to the solution at each time step of a convex program. Then the authors instantiate this general theorem for the strongly observable and weakly observable case, obtaining the same regret rates that characterize the non-contextual version of the problem.
The primary technical tool of the paper is the parameter defined in equation (3). Following Foster et al. 2021, the authors use this new formula instead of the bandit one (equation 1) to prove their results.
Strengths: - The problem studied is natural and well-motivated.
- The regret bounds match the lower bounds in Alon et al., 2015, so they are tight.
- The experimental results and the theoretical guarantees support the claim that a richer feedback structure improves the regret bounds w.r.t. the bandit case.
- It is nice that the authors devoted some time to presenting the detailed results for some famous classes of feedback graphs.
Weaknesses: - Once equation 3 is designed, the rest of the paper seems incremental to Foster et al.
- Equation 3 needs knowledge of the feedback graph. Therefore the paper works only in the informed setting.
- The introductory model entails time-varying feedback graphs, while the results for strong and weakly observable feedback graphs need the graph to be deterministic and known up-front.
Minor comment:
- make the \citet and \citep notation uniform
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We address the issues you mentioned as follows.
**1. Once equation 3 is designed, the rest of the paper seems incremental to Foster et al.**
We argue that achieving the tight regret bound with respect to the correct graph-theoretic dependence for different types of feedback graphs requires non-trivial analysis. We refer the reviewer to our response to Reviewer towH for details.
**2. Equation 3 needs knowledge of the feedback graph. Therefore the paper works only in the informed setting.**
Yes, our work focuses on the informed graph feedback setting, and we leave the uninformed setting as future work.
**3. The introductory model entails time-varying feedback graphs, while the results for strong and weakly observable feedback graphs need the graph to be deterministic and known up-front.**
We allow time-varying feedback graphs but the feedback graph needs to be informed at the beginning of each round. Our algorithm is defined for stochastic graphs, but our analysis focuses on the deterministic graph case in order to create correspondence with classic (non-contextual) minimax bounds for bandits with deterministic feedback graphs.
We will fix the citation notations you mentioned in the next revision. | Summary: The authors consider the adversarial contextual bandit problem with feedback graphs, in the finite function class setting with a realizability assumption, with an access to an online regression oracle. They extend previous approaches for the vanilla contextual MAB setting in order to obtain regret bounds of $\tilde{O}(\sqrt{\alpha T})$ with fully observable graphs and $\tilde{O}(d^{\frac13} T^{\frac23})$ for weakly observable graphs, where $\alpha$ and $d$ denote the graph's independence number and weak domination number respectively. The authors demonstrate their results empirically and show that their algorithm performs better than existing approaches when side observations are available.
Strengths: * The authors exhibit the first efficient algorithm for contextual bandits with feedback graphs, which obtains near-optimal regret guarantees both in the strongly observable and weakly observable settings.
* The techniques utilized by the authors seem to neatly generalize the approach of Foster et al. ('21) to the feedback graph setting, by generalizing the inverse gap technique to solving a convex problem together with utilizing a combinatorial property of the feedback graphs.
* The authors demonstrate in several empirical experiments that their algorithm outperforms SquareCB when feedback graphs are present, thus strengthening the idea that better performance can be obtained when side observations are available.
Weaknesses: * This is not a very major issue, but the algorithm suggested by the authors requires knowledge of the feedback graphs' independence number (or a good bound on it), which is a hard quantity to compute in general. This is only a minor point because many previous works in the feedback graphs literature also require knowing such a parameter, but I will remark that some results can be obtained without this knowledge.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Given the result presented in the paper, a very natural question that arises is whether or not for stochastic contexts and losses, the approach of [2] can be generalized to the feedback setting in order to obtain similar regret bounds, but with access to an **offline** regression oracle. I think such a result would be also very interesting, and I would expect that similar approaches to those that the authors used in this paper would work for extending [2], as their approach also extends SquareCB of [1]. I'd appreciate it if the authors could comment on whether or not they considered the stochastic setting as well, and if they think that their approach could be used in order to obtain results with an offline oracle.
[1] Foster, Dylan, and Alexander Rakhlin. "Beyond ucb: Optimal and efficient contextual bandits with regression oracles." _International Conference on Machine Learning_. PMLR, 2020.
[2] Simchi-Levi, David, and Yunzong Xu. "Bypassing the monster: A faster and simpler optimal algorithm for contextual bandits under realizability." _Mathematics of Operations Research_ 47.3 (2022): 1904-1931.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We address the issues you mentioned as follows.
**1. This is not a very major issue, but the algorithm suggested by the authors requires knowledge of the feedback graphs' independence number (or a good bound on it), which is a hard quantity to compute in general. This is only a minor point because many previous works in the feedback graph literature also require knowing such a parameter, but I will remark that some results can be obtained without this knowledge.**
Thanks for pointing this out. In fact, for the strongly observable graph case, we are able to achieve the same regret bound **without** knowledge of $\alpha_t$ by picking the parameter $\gamma$ adaptively. Specifically, note that for strongly observable graphs, $\gamma\cdot \min_{p\in \Delta_K}\rm\overline{dec_{\gamma}}(p)\leq\widetilde{O}(\alpha_{t})$ at round $t$. Therefore, we start from $\gamma_1=\sqrt{T}$ and keep track of the value $V_t=\sum_{\tau=1}^t\gamma_{\tau}\cdot \min_{p\in\Delta_K}\rm\overline{dec_{\gamma_\tau}}(p)$, which is indeed obtainable by solving the convex program. Whenever this value doubles, we double $\gamma_t$ and restart the algorithm; otherwise, we set $\gamma_{t+1}=\gamma_t$. This adaptive tuning does not require knowledge of $\alpha_t$ and achieves the same $\widetilde{O}\left(\sqrt{\sum_{t=1}^T \alpha_t}\right)$ regret bound up to a factor of $\log \left(\sum_{t=1}^T\alpha_t/T\right)$ overhead.
For the weakly observable graph case, since the weak domination number of a graph can be approximated efficiently within a factor of $\log K$, we are able to apply a doubling trick that adaptively tunes $\gamma$ by approximating the weak domination number directly on the sequence of observed graphs, achieving the same $\widetilde{O}(T^{1/3}(\sum_{t=1}^Td_t)^{1/3})$ regret bound. We will include the adaptive tuning part in the appendix in the next revision.
**2. Given the result presented in the paper, a very natural question that arises is whether or not for stochastic contexts and losses, the approach of [2] can be generalized to the feedback setting in order to obtain similar regret bounds, but with access to an offline regression oracle. I think such a result would be also very interesting, and I would expect that similar approaches to those that the authors used in this paper would work for extending [2], as their approach also extends SquareCB of [1]. I'd appreciate it if the authors could comment on whether or not they considered the stochastic setting as well, and if they think that they're approach could be used in order to obtain results with an offline oracle.**
Thanks for pointing this out! In fact, in an ongoing follow-up, we already have some results for stochastic contexts and losses using an offline regression oracle. Specifically, we are able to obtain the same $\widetilde{O}(\sqrt{\alpha T})$ regret bound when the graph is self-aware and $\widetilde{O}(d^{1/3}T^{2/3})$ when the graph is weakly observable.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. After reading the other reviews and comments, I have no further questions about the paper and I'm inclined to leave my score as it is. | Summary: This work is concerned with the problem of contextual bandits with graph feedback. The authors consider a setting where the contexts and the graphs are generated in an arbitrary manner and revealed to the learner at the beginning of each round. For a given context, the mean loss of each arm is assumed to be fixed. Furthermore, it is assumed that the learner is provided with a class of functions mapping context-action pairs to their mean loss, and that this class contains the true function. Following prior works in contextual bandits, the authors use an online regression oracle to estimate the true mean losses. These estimates are then used within the estimation-to-decision framework of Foster et al. (2021). Within this framework, the proposed algorithm requires solving a convex program at each round, for which closed form solutions are provided for special cases of interest. The authors prove regret bounds that depend on the structure of the feedback graphs and the regret of the online regression oracle. Additionally, empirical evaluations are carried out to showcase the algorithm's ability to take advantage of the side observations provided via the feedback graphs.
Strengths: Although the adopted approach relies on existing techniques, the application of the estimation-to-decision framework of Foster et al. (2021) for bandits with graph feedback is still an interesting contribution. Most notably, in Theorems 3.2 and 3.4, the authors bound the decision estimation coefficient for their algorithm in terms of the independence number for strongly observable graphs and the weak domination number for weakly observable graphs, which respectively are the graph theoretic quantities that characterize the minimax regret for these settings. Overall, the paper is well written, the setting is adequately motivated, and the clarity of the presentation is decent.
Weaknesses: - The regret bounds in Corollaries 3.3 and 3.5 are stated in terms of a uniform upper bound on the independence number or the weak domination number of the observed graphs. This is unsatisfactory since a single sparse graph could render the bound vacuous.
- In the formulated setting, the graphs are allowed to be stochastic, where each edge is realized with a certain probability which is revealed to the learner at the beginning of the round. However, all the provided theoretical results are for deterministic graphs.
- One point in need of clarification concerning the experiments is that it is mentioned in the beginning of Section 5 that all the graphs used in the experiments are deterministic, while the experiment described in Section 5.2.1 seems to involve stochastic graphs.
- For the experiment of Section 5.1, it might be more informative to include more intermediate cases between bandits and full information.
- A minor correction: In the discussion section, the authors address the limitation of their approach in the uninformed setting, where the feedback graph is only revealed to the learner after the decision is made. The authors then cite results for the non-contextual case from (Cohen et al., 2016). However, in that work, the graph is never revealed to the learner; only the actions in the neighbourhood of the played action and their losses are observed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is it possible to obtain bounds that scale with the average independence number (or weak domination number) of the graphs perhaps via an adaptive choice of the value of gamma at each round?
- What kind of regret bounds can we obtain for stochastic graphs using this approach?
- Does the proposed approach offer an advantage over existing algorithms for learning with graph feedback in the non-contextual case?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors did address some of the limitations of their work. Notably the fact that their approach requires the knowledge of the feedback graph before choosing the action.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We address the issues you mentioned as follows.
**1. The regret bounds in Corollaries 3.3 and 3.5 are stated in terms of a uniform upper bound on the independence number or the weak domination number of the observed graphs. This is unsatisfactory since a single sparse graph could render the bound vacuous. (Is it possible to obtain bounds that scale with the average independence number (or weak domination number) of the graphs perhaps via an adaptive choice of the value of gamma at each round?**
This is a good point. In fact, we state Corollaries 3.3 and 3.5 in terms of the maximum independence number $\alpha$ and maximum weak domination number $d$ only for simplicity; our regret bound can be made to scale with the averages $\frac{1}{T}\sum_{t=1}^T\alpha_t$ and $\frac{1}{T}\sum_{t=1}^Td_t$ by applying a doubling trick to the choice of $\gamma$. Specifically, for the strongly observable graph case, we set $\gamma=\sqrt{T}$ initially. If at some round $t$, $\gamma<\sqrt{\sum_{\tau=1}^t\alpha_{\tau}}$, then we double $\gamma$ and restart the algorithm; otherwise, we keep using the same $\gamma$. This gives the $\widetilde{O}\left(\sqrt{\sum_{t=1}^T\alpha_t}\right)$ regret bound. Applying a similar doubling trick for the weak domination number gives the $\widetilde{O}\left(T^{1/3}\left(\sum_{t=1}^Td_t\right)^{1/3}\right)$ regret bound as well. For an improved adaptive tuning method **without even knowing/computing $\alpha_t$ and $d_t$**, we refer to our response (1) to Reviewer WhSo.
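The doubling trick described in this response can be sketched in a few lines (our own illustration of the resulting $\gamma$ schedule; restarting the base algorithm is abbreviated to updating $\gamma$):

```python
import math

def gamma_schedule(alphas, T):
    """Doubling trick sketch: start from gamma = sqrt(T); whenever
    gamma falls below sqrt(sum of independence numbers seen so far),
    double gamma (and restart the base algorithm)."""
    gamma, total, schedule = math.sqrt(T), 0.0, []
    for a in alphas:
        total += a
        while gamma < math.sqrt(total):
            gamma *= 2.0  # restart with doubled gamma
        schedule.append(gamma)
    return schedule
```

Since $\gamma$ doubles at most $O(\log \sum_t \alpha_t)$ times, the restarts cost only a logarithmic factor.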
**2. In the formulated setting, the graphs are allowed to be stochastic, where each edge is realized with a certain probability which is revealed to the learner at the beginning of the round. However, all the provided theoretical results are for deterministic graphs. (What kind of regret bounds can we obtain for stochastic graphs using this approach?)**
While our algorithm is technically defined for general stochastic graphs, in order to compare our results to the regret bound for standard bandits with feedback graphs, we consider the deterministic case and show that for both strongly observable and weakly observable graphs, our regret bound matches the minimax rate. For stochastic graphs, if the realized feedback graphs are all strongly observable and $G_t(i,j)$ denotes the marginal probability that edge $(i,j)$ is realized, then our algorithm achieves $\widetilde{O}\left(\sqrt{\sum_{t=1}^T\alpha_t}\right)$ regret, where $\alpha_t$ is defined as the expected independence number given the stochastic feedback graph $G_t$. Similarly, for weakly observable graphs, our algorithm is able to achieve $\widetilde{O}\left(T^{1/3}\left(\sum_{t=1}^Td_t\right)^{1/3}\right)$, where $d_t$ is the expected weak domination number of the graph. To achieve this bound, we need to tune $\gamma$ adaptively, and we refer the reviewer to our response (1) to Reviewer WhSo.
**3. One point in need of clarification concerning the experiments is that it is mentioned in the beginning of Section 5 that all the graphs used in the experiments are deterministic, while the experiment described in Section 5.2.1 seems to involve stochastic graphs.**
In Section 5.2.1, we sample a graph from a certain distribution and this (deterministic) graph is informed to the learner at the beginning of each round. Thanks for pointing this out, and we will clarify it.
**4. For the experiment of Section 5.1, it might be more informative to include more intermediate cases between bandits and full information.**
Following the reviewers suggestion, we conducted an additional experiment on dataset RCV1 with the inventory graph (whose independence number is 1) and the averaged regret is shown as follows:
| round:t | averaged regret |
|---------|-----------------|
| 10000 | 0.3345 |
| 20000 | 0.2739 |
| 30000 | 0.2488 |
| 40000 | 0.2361 |
| 50000 | 0.2260 |
which is close to the regret in the full-info and cops-and-robbers cases, showing that the regret indeed scales with the independence number.
**5. Does the proposed approach offer an advantage over existing algorithms for learning with graph feedback in the non-contextual case?**
No, it does not. Our approach extends the recent framework of realizable contextual bandits with a regression oracle to the more general feedback graph model and achieves the optimal regret bound in both the strongly observable and weakly observable graph cases. Our contribution is focused on the contextual bandit case. While our approach does achieve minimax rates in the (non-contextual) bandit setting, existing minimax-optimal algorithms in that setting (e.g., Exp3.G in [1]) make fewer assumptions and are therefore preferable (e.g., non-contextual algorithms do not require realizability, as they essentially operate directly in policy space; moreover, non-contextual algorithms exist even for the uninformed graph setting).
Thanks for pointing out the other minor issues in the discussion section and we will fix that in the next revision.
[1] Noga Alon, Nicolo Cesa-Bianchi, Ofer Dekel, and Tomer Koren. Online learning with feedback graphs: Beyond bandits. In Conference on Learning Theory, pages 23–35. PMLR, 2015.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My evaluation of the paper remains the same; the analysis of the DEC in the feedback graphs setting is an interesting contribution, which would be made more complete with an added discussion on the adaptive choice of gamma and the achievable rates for stochastic graphs. | Summary: This work studies the contextual bandits problem in the presence of a feedback graph G_t. An edge (i -> j) in G_t means that taking action a_i allows us to observe the loss for action a_j. The work extends the SquareCB algorithm to this setting, the primary difference being the way the action sampling probability p_t is learned (Eq. (1) vs Eq (4)). The authors also present regret bounds for strongly observable and weakly observable graphs where the bounds improve over the standard contextual bandits setting when the independence number and cardinality of the dominating set is small.
Strengths: - Overall I found the paper to be well-written where the setup was explained well and the notation was easy to follow.
- This paper builds on the SquareCB algorithm. I appreciated the fact that the work makes an effort to try to separate their own contributions from the SquareCB paper.
- The paper covers both strongly and weakly observable graphs. In general, the statements in Theorems 3.2 and 3.4 are fairly intuitive.
Weaknesses: - By and large, the paper relies heavily on the SquareCB paper's analysis. Although the setting of feedback graphs is new, the analysis seems derivative.
- I would suggest that the authors provide more intuition behind the proofs of the key theorems. Even though the theorem statements are easy to understand, it would be nice to get some insight into the proofs, and specifically what are steps different from the standard CB regret analysis.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Are there any intermediate characterizations of graphs between strongly and weakly observable? In the strongly observable case, for nodes without self-loops, you require an edge from all other nodes. What if there are edges from most nodes (but not all)? Or what if most nodes are strongly observable but not all? Broadly, I am trying to understand how conservative the bounds are when the graphs are weakly observable (but perhaps not too weak). Is the algorithm expected to the complexity of the graph somehow?
- Follow-up to the above, the regret bounds take the worst-case graph over all the rounds. You allow the graph to change over t. Does the regret of your proposed algorithm also improve when the graphs are favorable for a large number of time steps?
(CB is not my area of expertise so I understand that both of the above assumptions are probably standard for doing regret analysis, but I think experimentally it would be nice to get some insight into how the actual algorithm performs relative to the theoretical regret).
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We address the issues you mentioned as follows.
**1. By and large, the paper relies heavily on the SquareCB paper's analysis. Although the setting of feedback graphs is new, the analysis seems derivative.**
The analysis of SquareCB relies on the constructive inverse gap weighting probability distribution (which is a closed-form solution) to bound the DEC term. However, no such closed-form solution exists in the bandit-with-feedback-graph problem, and it is even unclear whether the min-max problem can be efficiently solved in the case with feedback graphs. In our analysis, we first show that the min-max problem can indeed be solved efficiently. Then, we use a careful combination of Sion's minimax theorem and the graph-theoretic lemma in [1] to bound the DEC term and to achieve the minimax regret for different types of feedback graphs, which is non-trivial.
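For reference, the closed-form inverse gap weighting distribution that the rebuttal refers to (from Foster and Rakhlin's SquareCB paper, and which has no analogue under general feedback graphs) can be written as follows; the notation is that of the SquareCB paper, not of this submission:

```latex
% Inverse gap weighting (SquareCB): given loss predictions \hat{y}_t(a)
% over K actions, greedy action b_t = \arg\min_a \hat{y}_t(a), and
% exploration parameter \gamma > 0:
p_t(a) = \frac{1}{K + \gamma\bigl(\hat{y}_t(a) - \hat{y}_t(b_t)\bigr)}
    \quad \text{for } a \neq b_t,
\qquad
p_t(b_t) = 1 - \sum_{a \neq b_t} p_t(a).
```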
**2. I would suggest that the authors provide more intuition behind the proofs of the key theorems. Even though the theorem statements are easy to understand, it would be nice to get some insight into the proofs, and specifically what are steps different from the standard CB regret analysis.**
Thanks for the suggestion. We will highlight the differences between the SquareCB regret analysis and our regret analysis in the revised version. Furthermore, we will provide additional insight into our proofs, particularly focusing on 1) proving that the min-max problem can be efficiently solved; 2) how we apply the graph-theoretic lemma in bounding the DEC term with feedback graphs — a novel aspect that has not been studied before.
**3. Are there any intermediate characterizations of graphs between strongly and weakly observable? In the strongly observable case, for nodes without self-loops, you require an edge from all other nodes. What if there are edges from most nodes (but not all)? Or what if most nodes are strongly observable but not all? Broadly, I am trying to understand how conservative the bounds are when the graphs are weakly observable (but perhaps not too weak). Is the algorithm expected to adapt to the complexity of the graph somehow?**
As proven in Theorem 9 of Alon et al. 2015, even if the graph has one weakly observable node, we can construct an instance such that the regret is at least $\Omega(d^{1/3}T^{2/3})$ (and as shown in Corollary 3.5, our algorithm already achieves this minimax rate). Regarding relative difficulty: the minimax results indicate that within each major graph class (strongly or weakly observable) the rate in $T$ is determined, while the relative difficulty of the problem is captured by an associated graph-theoretic quantity (the independence number or weak-domination number, respectively), which affects the constants. Our contextual results pleasantly mirror the existing known non-contextual results.
**4. Follow-up to the above, the regret bounds take the worst-case graph over all the rounds. You allow the graph to change over $t$. Does the regret of your proposed algorithm also improve when the graphs are favorable for a large number of time steps?**
In fact, our regret bound scales with respect to $\sum_{t=1}^T\alpha_t$ ($\sum_{t=1}^Td_t$) in the strongly (weakly) observable case by applying a doubling trick on the choice of $\gamma$. We refer the reviewer to our response for question 1 of Reviewer n8Cc and Reviewer WhSo for details on how to tune $\gamma$. Therefore, when the graphs are favorable, i.e., the independence number is small, our regret bound also improves upon the one with the worst-case graph.
[1] Noga Alon, Nicolo Cesa-Bianchi, Ofer Dekel, and Tomer Koren. Online learning with feedback graphs: Beyond bandits. In Conference on Learning Theory, pages 23–35. PMLR, 2015.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their response. My (positive) evaluation of the paper remains the same. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable comments, especially for pointing out the adaptive tuning issue of the parameter $\gamma$ without requiring knowledge of the graph-theoretic quantities and beyond the worst-case graph. We address your issues in separate sections as follows. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work studied contextual bandits with feedback graphs. The authors provided an algorithm based on the recent Decision-Estimation Coefficient (DEC) framework that finds the next-step action distribution by solving a minimax optimization problem. To address the issue that the minimax problem is hard to solve, the authors also showed that there exists an efficient implementation of the solver, and closed-form solutions also exist for some special cases. For the experiment, the authors compared the proposed algorithm with a vanilla baseline algorithm that does not utilize the feedback graph information.
Strengths: The presentation is clear. The overall approach to solving the contextual bandits with feedback graph is convincing. The theoretical results are sound.
Weaknesses: The importance of this work remains unclear. The proposed algorithm is very similar to the original E2D algorithm proposed by Foster et al. 2021 with an additional expectation over the feedback graph. The analysis technique is also very similar. The key difficulty in proving the regret of SquareCBG is to build an effective estimate of the DEC constant. However, from the proof of Theorem 3.2, it seems that the proof is pretty standard by following the decomposition technique developed in Foster et al. 2021 (like their Proposition 5.1), while utilizing the graph node estimation lemma proposed by Alon et al. 2015. Therefore, I would recommend the authors highlight the main challenge for them to derive the theoretical results.
A typo. Line 62, for some action $j$ -> $a_t$.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We address the issues you mentioned as follows.
**1. The importance of this work remains unclear. The proposed algorithm is very similar to the original E2D algorithm proposed by Foster et al. 2021 with an additional expectation over the feedback graph. The analysis technique is also very similar. The key difficulty in proving the regret of SquareCB.G is to build an effective estimate of the DEC constant. However, from the proof of Theorem 3.2, it seems that the proof is pretty standard by following the decomposition technique developed in Foster et al. 2021 (like their Proposition 5.1), while utilizing the graph node estimation lemma proposed by Alon et al. 2015. Therefore, I would recommend the authors highlight the main challenge for them to derive the theoretical results.**
We agree that our graph-based DEC term is inspired by the work of [1]. However, the E2D algorithm is defined for general sequential learning problems and it is unclear whether it is an **efficient** algorithm for contextual bandits with **general graph** feedback due to the complexity of solving the min-max problem. In [1], all the instances of the DEC upper bound are derived in the bandit feedback case, while we propose the **first efficient algorithm** for contextual bandits with general graph feedback with an optimal regret bound. This is achieved by first showing that the minimax problem can be solved in an efficient way and then proving that the DEC is well-bounded with respect to the corresponding graph-theoretic numbers. Although the graph-theoretic lemma in [2] is an existing technique, how to apply it to bounding the specific DEC term for contextual bandits with feedback graphs is unclear and requires non-trivial analysis. The analysis for contextual bandits in [1] relies on the constructive inverse gap weighting probability distribution, which is a closed-form solution to the min-max problem. However, no such closed-form solution exists in our problem. Instead, in our analysis, we use a careful combination of Sion's minimax theorem and the graph-theoretic lemma and achieve the minimax regret bound $\widetilde{\mathcal{O}}(\sqrt{\alpha T})$ for strongly observable graphs. Moreover, we analyze weakly observable graphs, where the available feedback is less informative than bandit feedback. In this scenario, we also achieve the minimax regret bound $\widetilde{\mathcal{O}}(d^{1/3}T^{2/3})$.
Notably, we also want to highlight that our contribution is not limited to the theoretical side but our algorithm is practical and can be implemented efficiently in practice. Our experimental results prove the effectiveness of our approach.
**2. A typo. Line 62, for some action $j\rightarrow a_t$**
This is not a typo. In Line 62, we consider the stochastic feedback graph case. Given the selected action $a_t$ and the stochastic feedback graph $G_t$, we observe the loss of action $j$ with probability $G_t(a_t,j)$.
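To make the observation model concrete (an illustrative helper, not code from the paper): after the learner plays $a_t$, the loss of each action $j$ is revealed independently with probability $G_t(a_t, j)$.

```python
import random

def observed_losses(a_t, losses, G_t):
    """Simulate stochastic feedback-graph observations.

    After playing action `a_t`, the loss of each action j is revealed
    independently with probability G_t[a_t][j] (an edge weight of the
    stochastic feedback graph). Returns the dict of revealed losses.
    """
    return {
        j: losses[j]
        for j in range(len(losses))
        if random.random() < G_t[a_t][j]
    }
```

With a deterministic row of edge weights, the two extremes behave as expected: all-ones reveals every loss (full-information feedback), all-zeros reveals nothing.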
[1] Dylan J Foster, Sham M Kakade, Jian Qian, and Alexander Rakhlin. The statistical complexity of interactive decision making. arXiv preprint arXiv:2112.13487, 2021.
[2] Noga Alon, Nicolo Cesa-Bianchi, Ofer Dekel, and Tomer Koren. Online learning with feedback graphs: Beyond bandits. In Conference on Learning Theory, pages 23–35. PMLR, 2015. | null | null | null | null | null | null |
Fine-Grained Visual Prompting | Accept (poster) | Summary: This paper proposes Fine-Grained Visual Prompting (FGVP) that incorporates Blur Reverse Mask to improve the semantic localization capability of VLMs, like CLIP. It provides a comparison to other possible methods for highlighting the different parts/objects in the image, based on SAM and other techniques. The resulting method improves the performance on RefCOCO* datasets.
Strengths: The paper is written in a clear way and describes the algorithm well. The evaluation shows clear improvements and the ablation study is convincing. The idea to blur and mask the background of different object/part proposals is both innovative and significant.
Weaknesses: The related work section is missing two relevant works in the visual prompting domain:
1. Bahng et al., "Exploring Visual Prompts for Adapting Large-Scale Models", 2022
2. Bar et al., "Visual Prompting via Image Inpainting", NeurIPS, 2022
Moreover, while SAM is a powerful method for segmentation, it requires running the model with a relatively dense grid of keypoints. This runtime can be significant and should be discussed in the limitations section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can SAM be replaced with a simpler segmentation model, like color-based unsupervised segmentation methods?
Any ablation of SAM will be useful here for understanding the scale/quality that is required from the segmentation network for achieving good results.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation section is present and addresses some unexplored directions. I suggest adding runtime estimates.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions!
**Q1**: Missing related works in the visual prompting domain.\
**A1**: Thanks, we will cite these works in the revised version.
**Q2**: The runtime should be discussed since the method requires running the model with a relatively dense grid of keypoints.\
**A2**: Thanks for your advice, we will add and discuss the inference runtime limitations in the revised version.\
**Firstly**, we propose a method that uses the proposal boxes from a detector to generate masks. Consequently, proposals are sparse and their quantity depends on the outputs of the detector. In this situation, the time comparison results are presented in our **global response A1**.\
**Secondly**, under the framework where no detector is available and we need to use dense grid points as proposals, the runtime experiments are implemented in the **global response A2**.
**Q3**: Can SAM be replaced with a simpler segmentation model, like color-based unsupervised segmentation methods? More ablation studies of SAM for understanding the scale/quality to achieve good results.\
**A3**: **First of all**, SAM could be replaced with another segmentor; we implement the unsupervised mask generator in FreeSOLO to produce masks. Also, we use SAM of different scales to understand how the scale/quality affects the final results. The results can be found in the **response to Reviewer Xx88 A1**. \
**Notably**, we also conducted experiments in the supplementary materials (B.4, Robustness of Mask Precision) by manually expanding or shrinking the mask derived from SAM. We hope this provides more information about the mask quality. | Summary: This paper proposes a new “visual prompting” method. Visual prompting refers to the idea of altering images to guide the “attention” of a vision-language model when the model is used to embed the image. For example, to obtain an embedding for an object in an image that contains many objects, the user could draw a red circle around the object of interest and then embed the image. Since the vision-language model might have seen images during training in which important objects are highlighted with red circles, the resulting embedding might focus on the circled object.
This approach can be used to partially solve tasks such as referring expression comprehension: Given a set of bounding boxes and text descriptions, visual prompting can be used to find the best-fitting description for each box. A separate object detector is needed to obtain the bounding boxes first.
The paper proposes a visual prompting method that consists of blurring everything in the image except for the object of interest. This is done by first using a pretrained segmentation model (Segment Anything) to obtain a mask for the object of interest, and then blur everything outside of the mask. The motivation for this approach is that it simulates the shallow depth of field seen in photographs taken with a large aperture, which are commonly found in VLM training data.
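The blur-reverse-mask idea summarized above can be sketched in a few lines; this is a minimal illustration assuming a precomputed boolean object mask (the paper obtains masks from SAM), and the function name and parameters are illustrative, not from the paper's code.

```python
import numpy as np
from PIL import Image, ImageFilter

def blur_reverse_mask(image, mask, radius=8):
    """Blur everything outside `mask`, keeping the masked object sharp.

    image  : PIL RGB image.
    mask   : boolean numpy array of shape (H, W); True marks the object.
    radius : Gaussian blur radius for the background.
    """
    # Blur the whole image, then paste the sharp object back in via the mask.
    blurred = image.filter(ImageFilter.GaussianBlur(radius))
    mask_img = Image.fromarray(mask.astype(np.uint8) * 255, mode="L")
    # Where the mask is 255 take the original; elsewhere take the blurred copy.
    return Image.composite(image, blurred, mask_img)
```

The composite leaves the pixels inside the mask untouched while the background receives the blur, which is what simulates the shallow depth-of-field look.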
The blur method is compared to the “red circle” method and several variants in object detection and referring expression tasks. The blur method consistently performs best. In addition, hyperparameter sweeps for the blur radius and other hyperparameters are shown.
Strengths: 1. The proposed visual prompting method consistently improves over other methods.
2. The blurring approach is well motivated by the abundance of photographs with shallow depth of field. This is a clever way to exploit “natural supervision” present in large-scale web image training data.
Weaknesses: 1. The proposed method is significantly more complex than the “red circle” method, since it relies on a large segmentation model. The inference cost of the proposed method is therefore much higher than the “red circle” method. This should be acknowledged in the discussion and/or limitation sections.
2. While the proposed method works well, it is an incremental improvement over the “red circle” idea (https://arxiv.org/pdf/2304.06712.pdf) and the evaluation is not as comprehensive as in the “red circle” paper. For example, no analysis of the relative biases of red circle vs blur methods are performed, and no failure cases are discussed. It is not clear if the contribution is substantial enough for NeurIPS.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please use descriptive method names (keypoint/box/circle/…) instead of just A/B/C in Table 2 and elsewhere, so that the reader doesn't have to refer to Figure 1 all the time.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The runtime cost of the proposed method needs to discussed further.
The biases of the method compared to the "red circle" method need to be evaluated and discussed.
A generic "broader impact" statement is given in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions!
**Q1**: The inference cost of the proposed method.\
**A1**: We present the inference cost comparison experiments in the **Global Response A1 and A2**. Thank you for your advice; we will acknowledge it in our limitation sections in the revised version.
**Q2**: Analysis of the relative biases of red circle vs blur methods.\
**A2**: Following the analysis section in the RedCircle paper, we **first** select an image in COCO containing a man and a woman. We visualize their categories as classified by CLIP with added criminal-related texts under different visual prompting methods. The comparison of relative biases between RedCircle and FGVP is depicted in **Figure R2 in the PDF in the global response**. From the figure, RedCircle tends to classify persons into criminal categories, while FGVP and the original image without prompting classify them correctly.
This is mainly because a red circle is an unnatural marking relative to the web-scale data used to train CLIP, whereas FGVP produces a more natural, less post-processed image, which helps to reduce the biases.
**Next**, we quantify the biases following RedCircle based on the same datasets of FairFace and COCO. Additionally, we experimented with each visual prompting strategy on COCO with and without cropping the person out of the entire image as a pre-processing operation. From the following table, we can observe that although our FGVP might introduce a few more biases than the raw image, it substantially reduces biases compared to RedCircle due to its more natural prompting design.
|Model|Visual Prompt|FairFace|COCO w/ crop|COCO w/o crop|
|:-:|:-:|:-:|:-:|:-:|
|ViT-L/14@336px|Crop|13.0|40.8|43.6|
|ViT-L/14@336px|RedCircle|20.6 (+7.6)|49.9 (+36.9)|69.3 (+56.3)|
|ViT-L/14@336px|FGVP|15.9 (+2.9)|34.1 (-6.7)|47.8 (+4.2)|
|ViT-B/32|Crop|14.5|27.2|34.9|
|ViT-B/32|RedCircle|22.0 (+7.5)|44.1 (+29.6)|68.6 (+54.1)|
|ViT-B/32|FGVP|8.2 (-6.3)|19.5 (-7.7)|15.8 (-19.1)|
|RN50×16|Crop|19.5|55.1|50.7|
|RN50×16|RedCircle|38.5 (+19.0)|71.4 (+51.9)|72.6 (+53.1)|
|RN50×16|FGVP|21.6 (+2.1)|56.0 (+0.9)|28.9 (-21.8)|
**Q3**: Failure cases visualization and analysis. \
**A3**: Please refer to the failure case visualizations in **Figures R4 and R5 in the PDF document** and the corresponding analysis in the **global response A3**. **Lastly**, it's essential to highlight that we've established a comprehensive framework to facilitate evaluation comparisons among diverse visual prompting techniques and their post-processing ensembles. These aspects were not addressed by RedCircle.
**Q4**: Use descriptive method names (keypoint/box/circle/…) instead of just A/B/C in the Table.\
**A4**: Thanks for pointing out this inconvenience. We will address this in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The response addresses my main questions. The additional analyses, in particular the bias analyses, provide a strong argument for using the proposed method over the "red circle" method. Also, the inference cost analysis shows that while the proposed method is more expensive, there are ways to speed it up and the additional cost is not excessive. I therefore raised my recommendation to "Accept".
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Rf2R
Comment: We greatly appreciate your valuable feedback, as well as the time and dedication you invested in thoroughly reading and comprehending both our paper and our response. | Summary: This paper works on visual prompting. They proposed FGVP, together with Blur Reverse Mask, to improve the semantic localization ability of the vision-language model.
Strengths: Solid experiments showing the effectiveness of their method.
Weaknesses: In general, this is a good work. However, there’re several things concerning me:
- The novelty of this work is not well established. Seems like an engineering combination of previous works.
- The discussion of the upper bound is based on the assumption, while the legitimacy of the assumption is not well discussed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: No further questions.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No notable limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions!
**Q1**: The novelty of this work is not well established. Seems like an engineering combination of previous works.
**A1**: **Firstly**, thank you for your concern. Large vision-language models and segmentors potentially embed knowledge for various downstream tasks. However, their basic usage mainly focuses on global-level tasks and class-agnostic segmentation. **They cannot be easily adapted to achieve high performance in instance-wise localization and classification tasks.** Besides, additional end-to-end training for specific tasks would be costly. Therefore, developing a zero-shot architecture is an elegant solution, which can unleash the potential of large models and combine prior knowledge to effectively tackle certain tasks without task-specific tuning. Simultaneously, such an architecture can also serve as a good baseline for further improvement, providing deeper insights for related work.
**Then**, we would like to reclaim our contributions and inspirations:
- To better align target regions with their captions, we propose to **highlight the target instances according to their semantic mask by blurring the background as fine-grained visual prompting**.
- After extensive experiments across diverse visual prompts, we demonstrate the superior performance of FGVP, which achieves SOTA performance on **zero-shot benchmarks**.
- The background blurring strategy is aligned with the natural photography images in web-scale data used for training VLMs, which **exploits potential knowledge within large models**.
**In conclusion**, as mentioned by the reviewer Xx88, our work “**presents a novel idea that using SAM to generate semantic masks as better visual prompts, which is an original idea not explored in prior works**” and “**is a clever way to exploit 'natural supervision' present in large-scale web image training data**” by the reviewer Rf2R. For these reasons, we believe it is promising to further explore and improve the FGVP.
**Q2**: The discussion of the upper bound is based on the assumption, while the legitimacy of the assumption is not well discussed.
**A2**: From the table in the response to **Reviewer Xx88 A1**, we conducted an ablation study of the performance under different mask qualities. The **ground truth masks achieve almost the best overall performance** compared to those generated by SAM or other segmentors. \
**Notably**, the masks generated by SAM achieve competitive or even better results compared to GT. An important reason is that the GT masks in the COCO dataset are annotated with relatively rough polygons, which leaves a gap relative to the high-quality masks from SAM.\
**In fact**, instead of considering the ground truth mask as an upper bound, it is mainly **served as a unified and constant evaluation benchmark to measure and compare different visual prompts**, as GT masks have already existed for convenient inference.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. The response addresses most of my concerns. I thereby change my rating to 6.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Z75J
Comment: We greatly appreciate your valuable feedback, as well as the time and dedication you invested in thoroughly reading and comprehending both our paper and our response. | Summary: This paper proposes a visual prompting method that exploits the segmentation masks of interested objects in images to generate more fine-grained visual prompts. Experiments show that the proposed methods achieve competitive results on zero-shot referring expressions comprehension and part detection.
Strengths: 1. This paper is well-organized. The motivation and the framework is clearly presented.
2. The experiments confirm the effectiveness of the visual prompt design.
Weaknesses: 1. The ablation studies of different VLMs are missing. Since visual prompting is a zero-shot framework, it is natural that the performance of the visual prompts differ on VLMs trained on different data. The paper adopts the CLIP as the VLM in all experiments. How do different visual prompts perform on other VLMs?
2. Figures 2 & 3 are very similar and thus redundant. The authors should consider merging them into one figure.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Basically, I like the section 3.3 of the discussion of the prior knowledge introduced by the large amount of photography images in the large-scale image-text dataset. Could the authors provide more discussions on the analysis of the training data of the VLMs, and the effect of better alignments between the visual prompt design and the VLM training data?
2. How good are the zero-shot results compared with the few-shot / many-shot methods? The few-shot / many-shot methods may be provided for general reference.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The proposed visual prompts can be further verified on more object-based tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions!
**Q1**: Ablation studies on different VLMs. \
**A1**: The results across various VLMs demonstrate the consistent improvement of FGVP. Importantly, we observe that RedCircle experiences a significant performance decline when transitioning from CLIP to other models like SLIP, aligning with the results reported in the original RedCircle paper. In contrast, FGVP maintains its performance gain across different architectures.
|Method|Backbone|Data|Params|Input size|Crop|RedCircle|FGVP|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|OpenAI CLIP|ViT-L/14@336px|CLIP-400M|304M|336|45.3|48.9|52.8|
|OpenAI CLIP|ViT-B-32|CLIP-400M|87M|224|47.8|44.0|51.2|
|Open CLIP|ViT-L/14|LAION-2B|304M|224|49.9|48.1|49.8|
|SLIP|ViT-L/16|YFCC-15M|303M|224|44.3|33.7|49.3|
|BLIP-v2|ViT-L/14|Merged-129M|304M|224|46.9|37.7|51.0|
|EVA-02-CLIP|ViT-L/14@336px|Merged-2B|304M|336|51.4|51.5|55.8|
**Q2**: Figures 2 & 3 are very similar and thus redundant. The authors should consider merging them into one figure.\
**A2**: Thank you for your proposal. There are significant design differences between the two mentioned frameworks. One requires a detector, while the other does not. The masks in the first framework are sparse, tied to proposal boxes, whereas the latter generates dense masks from SAM prompted with grid points. Describing and showcasing the two frameworks separately will help highlight the differences in design and effect details. After careful consideration, we chose to keep both frameworks independent.
**Q3**: More discussions on the training data of the VLMs, and the effect of better alignments between the data and visual prompt design.\
**A3**: Thanks for valuing our discussion. We hope the bias analysis could provide more insight into the alignment between visual prompts and the VLM training data. In **Figure R2 in the PDF in the global response**, we added criminal text to regular COCO images with people. RedCircle tends to classify the person into criminal classes while FGVP does not. We assume that a red circle is unnatural and lacks realism compared to web images. On the contrary, an image with a blurred background tends to look more natural and common rather than artificially post-processed, which helps to reduce unexpected bias. For detailed quantitative results, please refer to **Response to Reviewer Rf2R A2**.
**In an ideal scenario**, our prompt may be better aligned with VLM training data by factoring in image depth. This involves gradual depth-based blurring incorporated with depth estimation results. Capturing natural shallow depth-of-field could be better than the current uniform blur.
**Q4**: Compared with the few-shot / many-shot methods.\
**A4**: We present the best performance as reported in the original paper for the methods listed below. The results are partly summarized from the original papers of Pseudo-Q [1] and CPT [2].\
[1] Jiang H, Lin Y, Han D, et al. Pseudo-q: Generating pseudo language queries for visual grounding. CVPR, 2022.\
[2] Yao Y, Zhang A, Zhang Z, et al. Cpt: Colorful prompt tuning for pre-trained vision-language models. arXiv preprint arXiv:2109.11797, 2021.
|Method|Published|Supervision|RefCOCO val|RefCOCO test-A|RefCOCO test-B|RefCOCO+ val|RefCOCO+ test-A|RefCOCO+ test-B|RefCOCOg val|RefCOCOg test|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|MAttNet|CVPR’18|Full|76.7|81.1|70.0|65.3|71.6|56.0|66.6|67.3|
|NMTree|ICCV’19|Full|76.4|81.2|70.1|66.5|72.0|57.5|65.9|66.4|
|FAOA|ICCV’19|Full|72.5|74.4|68.5|56.8|60.2|49.6|61.3|60.4|
|ReSC|ECCV’20|Full|77.6|80.5|72.3|63.6|68.4|56.8|67.3|67.2|
|TransVG|ICCV’21|Full|80.3|82.7|78.1|63.5|68.2|55.6|67.7|67.4|
|VC|CVPR’18|Weak|\--|33.3|30.1|\--|34.6|31.6|\--|\--|
|ARN|ICCV’19|Weak|34.3|36.4|33.1|34.5|36.0|33.8|\--|\--|
|KPRN|ACMMM’19|Weak|35.0|34.7|37.0|36.0|35.2|37.0|\--|\--|
|DTWREG|TPAMI’21|Weak|39.2|41.1|37.7|39.2|40.1|38.1|\--|\--|
|CPT|ArXiv’21|8-shot|41.3|48.2|35.7|42.6|49.3|35.4|47.4|47.4|
|CPT|ArXiv’21|4-shot|40.7|47.4|35.3|40.3|46.5|34.5|44.4|44.4|
|CPT|ArXiv’21|2-shot|39.8|45.6|33.9|38.6|44.5|32.8|44.7|44.3|
|CPT|ArXiv’21|1-shot|37.2|41.5|33.2|37.9|42.3|33.9|43.1|43.4|
|CPT|ArXiv’21|zero-shot|32.2|36.1|30.3|31.9|35.2|28.8|36.7|36.5|
|Pseudo-Q|CVPR’22|zero-shot|56.0|58.3|54.1|38.9|45.1|32.1|46.3|47.4|
|ReClip|ArXiv’22|zero-shot|45.8|46.1|47.1|47.9|50.1|45.1|59.3|59.0|
|RedCircle|ArXiv’23|zero-shot|49.8|58.6|39.9|55.3|63.9|45.4|59.4|58.9|
|FGVP (ours)|ArXiv’23|zero-shot|59.6|65.0|52.0|60.0|66.8|49.7|63.3|63.4|
**Q5**: The proposed visual prompts can be further verified on more object-based tasks.\
**A5**: Thank you for your advice. We plan to extend FGVP to more grounding and open-vocabulary benchmarks in future work. Here, we provide experiments on two more object-based tasks.\
**1)** **Firstly**, we extended FGVP to open-vocabulary segmentation based on the current state-of-the-art work OV-Seg [1].\
[1] Liang F, Wu B, Dai X, et al. Open-vocabulary semantic segmentation with mask-adapted clip. CVPR, 2023.
|Method|Segmentor backbone|CLIP|PAS-20|
|:-:|:-:|:-:|:-:|
|OVSeg|Swin-B|ViT-L|94.5|
|OVSeg-Blur|Swin-B|ViT-L|95.1|
**2)** **Next**, we experimented with FGVP on the Referring Image Segmentation benchmark. The table shows that we outperform the current state-of-the-art Global-Local CLIP [2] by replacing its feature cropping with our FGVP.\
[2] Yu S, Seo P H, Son J. Zero-shot Referring Image Segmentation with Global-Local Context Features. CVPR, 2023.
|Method|Visual Encoder|oIoU|mIoU|
|:-:|:-:|:-:|:-:|
|Cropping|ViT-B/32|22.7|24.8|
|Global-Local CLIP|ViT-B/32|24.8|26.2|
|FGVP+Global-Local CLIP|ViT-B/32|25.1|26.6|
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thank the authors for their response. Since the rebuttal addresses most of my concerns, I raise my score to "accept".
---
Reply to Comment 1.1.1:
Title: Response to Reviewer jMVP
Comment: We greatly appreciate your valuable feedback, as well as the time and dedication you invested in thoroughly reading and comprehending both our paper and our response. | Rebuttal 1:
Rebuttal: Thank you to all the reviewers for their valuable comments and suggestions! Here are the responses to some common concerns.
**Q1**: Experiments of inference cost with **detector proposals (Figure 2)**.\
**A1**: We conducted efficiency experiments comparing the inference cost, in terms of computation and speed, between our method and others. As FGVP relies on SAM for semantic masks, we ablate **scalability** over various SAM backbone scales. Notably, the post-processing technique that filters small disconnected regions and holes in masks can further improve performance at the cost of speed; disabling this mask-filter post-processing **greatly improves the speed without losing much performance**. Experiments are run on RefCOCO with a CLIP-pretrained ViT-L/14@336px on 8×NVIDIA A100. Generally, FGVP takes more inference time than the others, which we will acknowledge as a limitation and direction for improvement in the revised version.
|Visual Prompt|SAM scale|Mask-filter|CUDA memory (GB)|Inference time (min)|Image per GPU second|Acc|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Crop|\--|\--|0.91|4.49|5.03|45.3|
|RedCircle|\--|\--|0.91|4.00|5.64|48.9|
|FGVP|base|no|1.32|5.20|4.34|51.7|
|FGVP|base|yes|1.32|27.47|0.82|52.1|
|FGVP|large|no|2.14|6.29|3.59|51.0|
|FGVP|large|yes|2.14|27.49|0.82|52.2|
|FGVP|huge|no|3.42|7.34|3.08|51.9|
|FGVP|huge|yes|3.42|28.02|0.81|52.8|
**Q2**: Experiments of inference cost with **dense grid points as proposals (Figure 3)**.\
**A2**: Without provided detectors or proposals, we utilize SAM with dense grid points and an NMS threshold for mask filtering. We explore speed-performance trade-offs by varying the grid size and NMS threshold. All visual prompting methods, whether boxes, circles, or masks, rely on SAM to yield proposals, so the NMS threshold and grid size affect all methods equally. Experiments on PACO with a CLIP-pretrained ViT-L/14@336px and SAM-huge on 8×NVIDIA A100 show that **FGVP can outperform RedCircle in both speed and accuracy** at the trade-off point of grid size 8 and NMS threshold 0.95.
|Visual Prompt|Grid size|NMS threshold|Inference time (min)|Image per GPU second|Acc|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Crop|16|0.7|13.34|3.27|16.5|
|Crop|32|0.95|37.25|1.17|19.5|
|RedCircle|16|0.7|12.75|3.42|17.4|
|RedCircle|32|0.95|34.18|1.28|19.9|
|FGVP|8|0.7|8.33|5.24|17.3|
|FGVP|8|0.95|9.17|4.76|20.5|
|FGVP|16|0.7|14.89|2.93|18.4|
|FGVP|16|0.95|17.29|2.52|22.0|
|FGVP|32|0.7|34.73|1.26|19.0|
|FGVP|32|0.95|39.66|1.10|23.2|
**Q3**: Granular and Failure Analysis\
**A3**: \
**1)** Per-class performance.\
We mainly compare the per-class performance of RedCircle and our FGVP on COCO. FGVP (Blur Reverse Mask) surpasses RedCircle in class accuracy for 53 out of 80 classes, particularly excelling in major categories like car, person, bird, and chair. However, FGVP is inferior to RedCircle in a few categories such as baseball glove and traffic light. We show all results in **Figure R1 in the PDF in the global response**.
|Category|person|car|bird|chair|traffic light|tennis racket|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Instance Number|10777|1918|427|1771|634|225|
|RedCircle|35.6|23.6|58.8|32.5|67.4|87.1|
|FGVP|76.2 (+40.6)|62.5 (+38.9)|91.6 (+32.8)|62.7 (+30.2)|52.8 (-14.6)|45.3 (-41.8)|
**2)** Detailed failure analysis compared with other methods.\
We visualize the challenging cases in **RefCOCO (Figure R4) and COCO (Figure R5) in the PDF file in the global response**.
- **For the RefCOCO dataset**, we find that all visual prompts perform poorly when the target object is difficult to recognize due to a semantic gap or weak perceptual difference from the background. For example, in Figure R4 (1), the "white shirt" mentioned in the caption is hardly visible in the image. In sample (2), FGVP grounds the "left bicycle" to the front wheel of the actual bicycle (the only part that is not cut off or occluded), treating the rest of the actual bicycle as if it belonged to another bicycle; this is largely due to a perceptual illusion. Samples (3) and (4) indicate that the performance bottleneck of FGVP may lie within SAM, which can yield inaccurate masks or masks containing unrelated noise.
- **For the COCO dataset**, we observed that small objects are localized more accurately using positive masks rather than blurred masks. This suggests adapting the specific usage of semantic masks in FGVP to different types of instances.
**Consequently**, these difficult samples pose a general challenge to the algorithm, and overcoming it is an important avenue for our future research, e.g., strengthening the involvement and connection of textual cues, and gathering and tuning on visually difficult samples.
Pdf: /pdf/08173ac45a14671556d22a899f5d43cdd7e040c8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes Fine-Grained Visual Prompting (FGVP), which uses precise semantic masks from SAM as visual prompts to improve spatial localization of vision-language models (VLMs) like CLIP for instance-level tasks. The key contributions are:
- Systematically study different visual prompting techniques like cropping, boxes, circles, masks, etc. Show that blurring background outside target mask (Blur Reverse Mask) works best.
- Achieve state-of-the-art results on referring expression comprehension benchmarks RefCOCO/RefCOCO+/RefCOCOg, outperforming prior works.
- Demonstrate FGVP can enable zero-shot part detection on PACO dataset without needing any box proposals, again outperforming other prompting methods.
Strengths: Originality: The paper presents a novel idea that using SAM to generate semantic masks from detected bbox as better visual prompts of specific instance in the image, which is an original idea not explored in prior works.
Quality: The overall approach is technically sound. The experiments follow standard protocols and are extensive in studying different visual prompting design. The results demonstrate benefits over existing methods.
Clarity: The paper is well-written and easy to follow. The problem context, proposed method, experiments are clearly explained. Figures and tables aid understanding.
Significance: FGVP pushes state-of-the-art in two important vision-language tasks - referring expression and part detection. The analysis may inspire more research on the properties of VLMs regarding spatial understanding.
Weaknesses: While the paper presents a novel fine-grained visual prompting technique and achieves state-of-the-art results, there are some aspects where the analysis could be strengthened:
More Insights from Study: The paper performs an extensive set of experiments on different visual prompt designs. However, it could provide a more detailed analysis of the inferences and insights derived from this study. For instance, comparing the gap between using ground truth and predicted boxes would give insights into the impact of mask quality. Explaining the differences in various design choices (VP, PP, proposals etc.) in Table 3 would be informative. Attention visualizations could help reveal why fine-grained prompting is more beneficial.
Computational Overhead: The paper proposes generating semantic masks using SAM models. However, the computational overhead this introduces is not analyzed. Reporting inference times, scalability, etc. would provide a better understanding of its practical viability.
Joint Language and Visual Prompting: The study is currently limited to exploring only visual prompts. Evaluating prompts on both visual and textual modalities could offer a more comprehensive understanding of VLMs. It is good that authors discussed this point in the limitation though.
Granular and Failure Analysis: Providing per-class performance breakdowns and detailed failure analysis compared to other methods through examples would provide useful insights into where the improvements come from.
In summary, while the core ideas are promising, performing a more thorough empirical analysis along the above dimensions would strengthen the paper and provide a better understanding of the factors behind the efficacy of fine-grained visual prompting.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the main suggestions in weakness.
- For the RefCOCO/RefCOCO+/RefCOCOg results in Table 4, which subsets are the numbers reported on?
- Lines 288-290: if there are better grid sizes / NMS thresholds, why still use the suboptimal default ones?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are discussed in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions!
**Q1**: More insights from the study.\
**A1**: \
**1)** Compare the gap in mask quality caused by using ground truth and predicted boxes.\
**Firstly**, Table 2 in the main paper contrasts results with masks derived from ground truth (left side) and proposal boxes (right side).\
**Secondly**, we broadened the experiments by employing SAM at various scales and the unsupervised mask generator from FreeSOLO, yielding diverse mask qualities. We present the performance of FGVP with Blur Reverse Mask prompting. Generally, FGVP achieves the highest accuracy with ground-truth masks, reinforcing the rationale of "using ground truth as the performance upper bound." Larger SAM backbones typically yield better masks and thus higher performance, while the unsupervised mask generator gives the lowest performance owing to poor mask quality.
|Mask source|SAM scale|COCO|PACO|RefCOCO|RefCOCO+|RefCOCOg|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|GT mask|\--|67.8|42.7|52.8|58.0|63.5|
|GT box|huge|68.0|39.5|52.3|56.1|62.2|
|GT box|large|67.9|39.8|51.6|55.6|61.3|
|GT box|base|67.4|39.1|52.1|55.2|60.8|
|UNINEXT proposal|huge|\--|\--|52.8|55.4|57.8|
|UNINEXT proposal|large|\--|\--|52.2|54.8|57.5|
|UNINEXT proposal|base|\--|\--|52.1|54.5|58.8|
|Mask generator of FreeSOLO|\--|17.4|11.6|27.7|30.5|38.2|
**2)** Design choices with the visual prompt (VP), post-processing (PP), proposals, etc. in Table 3.\
**Above all**, the guiding principle for all settings is to maintain consistency with the compared works.
**To be specific**:
- **CPT and our codebase** focus on individual VP performance without PP. We use UNINEXT and MAttNet proposal banks to demonstrate the robustness of our enhancements. It's important to note that different proposal selections solely affect the box candidates, which are equitably shared among all the comparison prompting methods.
- **For the ReClip codebase**, the ReClip employs cropping and colorful boxes as visual prompts, with default spatially-relations post-processing. To ensure a fair comparison, we **first** add cropping as an ensembled VP to all experiments. **Next**, to facilitate comparison with RedCircle (which inherently uses Score Subtraction as post-processing and primarily ensembles based on three circle VPs, as summarized in Table 1), we adopt the same three types of prompt formats but based on semantic masks. **Finally**, we aim to explore a higher performance possible under various VP and PP ensembles.
**3)** Attention visualizations between fine-grained prompting and previous methods.\
Following the pipeline established in DINO, we present attention visualizations in **Figure R3 of the PDF in the global response**, comparing Crop, RedCircle, and FGVP on the RefCOCO dataset. The green number in the top-left corner denotes the similarity score, and samples are arranged in descending order of score. Each correct prediction is indicated by a red rectangle.\
The visualization results show the reasons for the superior performance of FGVP and the limitations of RedCircle. This is because FGVP demonstrates a more reasonable behavior in terms of reducing attention to weakly correlated or distracting backgrounds and enhancing focus on the target object, especially small objects. While RedCircle can also alter attention allocation, the degree is quite limited, resulting in inferior performance. \
At the same time, the attention analysis effectively validates the main perspective of our paper regarding the capability of the reverse blur mask to reduce attention on weakly correlated pixels and enhance focus on the main subject. Therefore, the proposed FGVP holds better potential in the field.
**Q2**: Experiments on the computational overhead, inference times, and scalability.\
**A2**: Please refer to the **global response A1 and A2** for detailed information.
**Q3**: Joint Language and Visual Prompting.\
**A3**: We appreciate your insightful comment and constructive guidance and have included two experiments for better understanding.\
**1)** **Firstly**, a general approach to text prompts employs spaCy to extract nouns from captions. For instance, given a caption like "a dog on the table," the extracted noun would be "dog." Subsequently, the similarity score $S$ can be derived through an ensemble of the caption score $S_c$ and the prompted noun score $S_n$: $S = r \times S_c + (1 - r) \times S_n$, where $r$ represents the balance ratio.
||caption ($r$=1)|noun ($r$=0)|ensemble ($r$=0.5)|
|:-:|:-:|:-:|:-:|
|example|a dog on the table|dog|\--|
|FGVP|52.8|53.1|54.4|
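As a minimal sketch of this score ensemble (our illustration only; `ensemble_score` and the sample values are hypothetical, not the actual implementation):

```python
def ensemble_score(s_caption, s_noun, r=0.5):
    """Blend the caption score S_c and the prompted-noun score S_n:
    S = r * S_c + (1 - r) * S_n, where r is the balance ratio.
    r = 1 recovers the caption-only score and r = 0 the noun-only score."""
    return r * s_caption + (1.0 - r) * s_noun

# e.g. similarity for the caption "a dog on the table" vs. the extracted noun "dog"
s = ensemble_score(s_caption=0.52, s_noun=0.54, r=0.5)  # roughly 0.53
```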
**2)** **Furthermore**, we have explored strategies for rephrasing text based on visual prompts. For instance, when presented with an image prompt featuring a blurred background, the original text "a dog on the table" could be rephrased as "a dog on the table with a blurred background".
||caption|<caption> with blurred background|
|:-:|:-:|:-:|
|example|a dog on the table|a dog on the table **with blurred background**|
|FGVP|52.8|54.8|
**Q4**: Per-class performance and comprehensive failure analysis compared to other methods.\
**A4**: Please refer to the **global response A3**.
**Q5**: Other issues.\
**A5**: \
**1)** The performance reported in Table 4 is derived from the validation set of all datasets.\
**2)** Using the suboptimal settings in Table 4 while there are better grid size / NMS Thresholds.\
The grid size and NMS threshold we employed are default settings used in the official codebase of SAM. We avoided a higher setting due to increased inference cost and lengthy times, as discussed in **global response A2**. \
**Regarding your query**, an ablation study with larger settings (NMS threshold = 0.9, grid size = 32) is included, showing the continued superiority of our FGVP.
||PACO|RefCOCO|RefCOCO+|RefCOCOg|
|:-:|:-:|:-:|:-:|:-:|
|Crop|19.5|17.4|22.1|35.3|
|RedCircle|19.9|25.9|31.0|35.0|
|FGVP|23.2|42.1|46.0|50.7|
---
Rebuttal Comment 1.1:
Title: Reviewer comment after rebuttal
Comment: I really appreciate the author's rebuttal. Most of my concerns are addressed. Therefore, I raise my score by one. However, I do think the paper could go a bit deeper when it comes to the insights from the empirical studies on which prompting works best.
Take for example the "middle zebra" and "second guy from right white shirt" mentioned in the rebuttal attachment. These seem to hinge on understanding spatial relationships. Perhaps they'd benefit more from a global context than just crop-based prompting. But with blurring, it seems we're still cutting down on that global context info, making things trickier.
Also, I'm curious about the per-class performance analysis - why does FGVP fall short compared to RedCircle for things like the baseball glove and traffic light? Digging deeper into these questions and sharing insights would really help in understanding FGVP better. It would be great to get a clearer picture of what makes one visual prompting strategy stand out over another.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Xx88
Comment: We greatly appreciate your valuable feedback, as well as the time and dedication you invested in thoroughly reading and comprehending both our paper and our response. Here is some more analysis regarding your question.
### 1) Understanding of spatial relationships.
**Essentially**, the referring captions can be separated into two parts: **relative position description** and **object nouns**. Changing from cropping to blurring can preserve the relative positional relationships while also reducing background noise. However, it does indeed decrease the global contextual information, leading to less confidence in terms of "spatial position" compared to the vanilla image. **Yet**, for the confidence in "object nouns," I think both RedCircle and the original image would fall behind, as the model would need to handle more focal attention on other objects.
Taking the example of "the second guy from the right wearing a white shirt," without blurring, the model would also have to consider the confidence of "object nouns" across multiple background objects, thereby affecting the scoring on the actual target. **Therefore**, I think blurring **achieves a good trade-off in terms of confidence in both relative position description and object nouns**.
### 2) Per-class performance analysis.
**Empirically**, blurring doesn't perform well when identifying small target objects.
**To find out**, we further computed the average proportion between the target size and the total image size for each category. The results reveal that RedCircle outperforms blur in categories where the instances occupy only 3% of the image, while categories where blur is superior occupy around 10% of the image size.
**Moreover**, from **Figure R5 in the PDF of the global response**, we can observe that although both methods utilize fine-grained masks as the foundation for prompting, with blur, the emphasis is on weakening the background to highlight the target, while the other approach involves applying colorful masks to the target area for positive marking. This visualization indicates that **for small target objects, the positive mask-type annotations tend to yield better results**. **Notably, RedCircle is also a positive marking strategy**. Categories such as "baseball glove" and "traffic light," which, on average, only occupy 0.9% and 0.7% of the entire image, respectively, fall into the category of extremely small objects.
**In fact**, when we employ positive masks as visual prompts, we achieve superior results compared to RedCircle for these two categories. This particular design concept for visual prompts is summarized in Section 3.2 of the paper. | null | null | null | null | null | null |
Relative Entropic Optimal Transport: a (Prior-aware) Matching Perspective to (Unbalanced) Classification | Accept (poster) | Summary: The paper proposes an inverse Relative Entropic Optimal Transport (RE-OT) point of view for classification problems. The paper then proposes to use inverse RE-OT with a time-varying prior for solving long-tailed classification problems. Evaluations show improvements compared to existing baselines such as vanilla softmax and balanced softmax on long-tailed tasks.
Strengths: The reviewer believes the main strengths of the paper are solid theory and novelty of the framework. Specifically,
- The OT and novel RE-OT theory is introduced clearly, with provided visualizations and explanations to aid understanding.
- The novel RE-OT theory makes a nice connection and a unified view for many existing classification methods, as summarized in Table 1.
- Experiment results show some advantage compared to existing methods.
Weaknesses: The reviewer mainly has concerns with the impact of the work. In particular,
- Some additional intuitions would be helpful for the reviewer on why RE-OT gives a good or useful way for tackling classification problems. Currently, the RE-OT framework is laid out clearly, and the authors proposed to apply RE-OT to solve long-tailed classification, but the paper does not make clear what methodological or practical benefit the proposed framework has.
- Comparing different methods in Table 1, the newly proposed method differs from Balanced-Softmax only slightly in the choice of Q, and the proposed choice of Q is not properly motivated in my opinion. The work also does not compare with using epoch-varying losses or two-stage approaches for other methods.
- The evaluations show relatively minor improvements compared to existing works, and most experiments do not report error bars so it is unclear if most improvements are significant.
The work also has some minor issues with clarity:
- The work uses $n$ for both number of iterations and dimension of polytope $\Sigma_n$ which can be confusing.
- Proposition 3 appears to be misplaced.
- The notations in Figure 2 and equation (10) are quite hard to understand.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: - How does (15) relate to (10)?
- Are (17) and (18) unique to the OT framework?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have addressed one of the limitations of the work, i.e. the assumption of known label distribution of test data. The authors have not addressed potential negative societal impacts of their work as it is mostly theoretical and methodological.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate the time and effort you have spent reviewing this paper. Below are our responses:
>***Q1: the paper does not make clear what methodological or practical benefit the proposed framework has.***
We think that our (RE-)OT framework for classification is highly motivated and valuable. Although we mentioned in the future work section that the OT perspective might help approach open-set problems, we elaborate more on this direction here. Below are a few directions enabled by the OT perspective that are hardly possible in the traditional Bayesian way:
**1. Change in the coupling constraint $\\{ P: P1=a \\}$**. We can adopt other constraints instead of $\\{ P: P1=a \\}$, such as the constraints in (modified) Optimal Partial Transport (OPT), $\\{ P: P1 \leq a, 1^\top P 1 = s \\}$. This allows a model that does not just choose one of all classes: it can reject classification if it determines that a sample fits none of the candidate labels.
**2. Change in label selection in one model**. In the OT perspective, classification is not understood in terms of $P(y|x)$ but rather from a matching perspective. Therefore, when we have representations of samples and labels, we can freely choose different label sets to form different classification tasks. For example, during training we classify into $\{y_1, y_2, \dots, y_{10}\}$, but during testing we classify into $\{y_3, y_7, y_{10}\}$. Additionally, if the label representation is based on NLP embedding models, we can include previously unseen labels in the classification testing phase, potentially addressing zero-shot learning problems without the need for additional parameter training.
You can see more in the A3 response of Reviewer RCtv (due to the length limitation).
Because of the many variants and theories within OT, viewing classification as OT naturally allows us to incorporate existing OT knowledge into the field of classification, which is ongoing work we are conducting. Only through this OT perspective does it become possible to eventually develop models that can "Classify Anything," including different task selections, variable classification constraints, and more within one model. Therefore, the main purpose of this paper is to provide a different perspective on classification models and offer conceptual assistance to general recognition models.
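A toy sketch of the label-selection idea in point 2 above (everything here, including `match_labels` and the one-hot features, is our hypothetical illustration rather than the paper's implementation): with sample and label representations in a shared space, classification reduces to matching against whichever candidate label set is chosen at test time.

```python
import numpy as np

def match_labels(sample_feats, label_feats, subset):
    """Score samples against an arbitrary subset of label embeddings,
    so the candidate label set can change freely between training
    (e.g. {y1, ..., y10}) and testing (e.g. {y3, y7, y10})."""
    cand = label_feats[subset]
    # cosine similarity between every sample and every candidate label
    s = sample_feats / np.linalg.norm(sample_feats, axis=1, keepdims=True)
    c = cand / np.linalg.norm(cand, axis=1, keepdims=True)
    sims = s @ c.T
    # map the row-wise argmax back to indices in the full label set
    return [subset[i] for i in sims.argmax(axis=1)]

# toy features: one-hot label embeddings; samples align with labels 2 and 0
labels = np.eye(4)
samples = np.array([[0., 0., 1., 0.], [1., 0., 0., 0.]])
preds = match_labels(samples, labels, subset=[0, 2])  # restrict test labels to {y0, y2}
```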
>***Q2: "Comparing different methods in Table 1, the newly proposed method only differs from Balanced-Softmax in the choice of Q slightly, and the proposed choice of Q is not properly motivated in my opinion. The work also does not compare with using epoch-varying losses or two-stage approaches for other methods."***
A2: Yes, we agree that the proposed choice of $Q$ is not properly motivated and that the new method differs from Balanced-Softmax only slightly in the choice of $Q$; our main purpose, however, is to establish the conclusion that classification is OT. We believe a more suitable $Q$ can eventually be found to enhance representation learning. In fact, we are currently adopting a new $Q$, computed using a vanilla-softmax-trained "teacher" model. We present the results in Tab. 3 of the uploaded PDF, and it is interesting to find a great improvement for the head classes, which inspires us to further explore the choice of $Q$.
>***Q3: "The evaluations show relatively minor improvements compared to existing works, and most experiments do not report error bars so it is unclear if most improvements are significant."***
A3: Thanks. We trained the different models 10 times on CIFAR100, and the results are in Tab. 2 of the PDF file.
>***Q4:How does (15) relate to (10)?***
A4: The following are our thoughts that relate Eq.15 to Eq.10.
Eq. 10 produces a deblurred image by gradually optimizing $Q$, indicating that $Q$ need not be set as a fixed matrix. A two-stage approach that first trains with vanilla softmax and then with the modified loss can yield good results, whereas directly training with the modified loss (e.g., Balanced Softmax without vanilla-softmax pretraining) may perform worse. This suggests that different training stages require varying degrees of correction, so we propose Eq. 15 as our loss function.
>***Q5:Are (17) and (18) unique to the OT framework?***
A5: To the best of our knowledge, yes; we have only encountered the relevant definitions and formulas within OT. In fact, Eq. 17 investigates the transformation of features between two different spaces based on the coupling. It is not immediately apparent how to draw a direct connection between this problem and the traditional Bayesian perspective, which primarily studies the conditional probability $P(y|x)$.
>***Q6:The work uses n for both number of iterations and dimension of polytope $\Sigma_n$ which can be confusing.***
A6: Thanks for pointing it out. We will correct it in the new version.
>***Q7:Proposition 3 appears to be misplaced.***
A7: Prop. 3 mainly follows the dual form of Entropic OT. Please see Proposition 4.4 in [1] (Page 77 if in the same version)
[1] G. Peyre and M. Cuturi. Computational optimal transport. Foundations and Trends in Machine Learning, 11(5-6):355–607, 2019.
>***Q8:The notations in Figure 2 and equation (10) are quite hard to understand.***
A8: Figure 2 talks about the barycenter calculated by Eq. 9 between noise and a leopard image. The first row is the results based on Entropic OT, where the barycenter images are blurred. The following two rows are RE-OT-based results setting different $Q$. We will polish the image and description. Thanks.
For Eq. 10, we have interpreted in the response of A1 for the Reviewer RCtv. Due to the limited length of rebuttal, we don't repeat our answers. Thank you for your understanding.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their response. For Q1, the proposed usages of the RE-OT framework are indeed interesting and potentially fruitful. I can now agree with the authors that the proposed OT view of classification could be valuable. However, it is also the case that these newly proposed ideas have not been demonstrated in this work, so it is still unclear if they are actually useful. For Q2, it appears that the teacher model caused a quite drastic decline in accuracy in the Few case, which is somewhat unintuitive. I still do not understand the answer to Q4 and Q8, a confusion which seems to be shared by other reviewers. The rest of the answers are fine.
Overall, I think the framework proposed in this work is of interest to NeurIPS but many parts of the work can be improved (in particular clarity and better motivation). I have read the other reviewer's comments as well as the author's rebuttal. I lean towards keeping my score unchanged, but I would look forward to see further clarifications from the authors on Figure 2 and equation (10), as well as responses from other reviewers.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response. Here is our new reply:
>***Q1: "However, it (OT view for classification) is also the case that these newly proposed ideas have not been demonstrated in this work, so it is still unclear if they are actually useful."***
A1: We conducted a simple experiment to further demonstrate the usefulness of viewing classification as OT. In the inference phase on the testing data, we replaced the softmax (i.e., constraints within $U(\mathbf{a})$) with the Sinkhorn algorithm (i.e., constraints within $U(\mathbf{a},\mathbf{b})$), where $\mathbf{b}$ represents the assumed class ratio in the testing data (e.g., long-tailed, uniform, or reverse long-tailed distribution). The accuracy (\%) results are shown below:
| Models: vanilla softmax CE Loss | LT | Uniform | Reverse LT |
| ------------------------------- | ---- | ------- | ---------- |
| Inference with Softmax | 58.7 | 40.8 | 23.5 |
| Inference with Sinkhorn | 59.1 | 46.6 | 39.6 |
| Models: our RE-OT based Loss | LT | Uniform | Reverse LT |
| ---------------------------- | ---- | ------- | ---------- |
| Inference with Softmax | 60.4 | 47.4 | 35.1 |
| Inference with Sinkhorn | 60.4 | 47.9 | 40.9 |
In the above experiments, we reconfigured the testing data to have long-tailed (IF=10), uniform (unchanged), and reverse long-tailed (IF=10) distributions on CIFAR100. We found that using Sinkhorn as the inference method during the testing process can lead to significant improvements, particularly for the model trained with vanilla softmax. These experiments validate the effectiveness of treating classification as Optimal Transport during the testing inference stage.
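A minimal sketch of this batch-level inference scheme (illustrative only; the function name `sinkhorn_predict` and its defaults are assumptions, not the actual experimental code): the softmax kernel of the logits is rescaled so that the column marginal matches the assumed class ratio $\mathbf{b}$, and each row's argmax of the resulting coupling gives the prediction.

```python
import numpy as np

def sinkhorn_predict(logits, b, eps=1.0, iters=200):
    """Project exp(logits/eps) onto couplings with uniform row marginal over
    samples and column marginal b (assumed test-time class ratio), then take
    each row's argmax. Illustrative sketch, not the paper's code."""
    n = logits.shape[0]
    a = np.full(n, 1.0 / n)  # uniform mass per test sample
    # Subtracting the row max only rescales rows; Sinkhorn absorbs this into u.
    K = np.exp((logits - logits.max(axis=1, keepdims=True)) / eps)
    u, v = np.ones(n), np.ones(logits.shape[1])
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]  # coupling in U(a, b)
    return P.argmax(axis=1)
```

For instance, if every row slightly favors class 0 but the assumed ratio puts most mass on class 1, the coupling's column constraint can flip the batch predictions, which per-sample softmax never does.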
>***Q2: Clarifications on Figure 2 and equation (10) .***
A2: Thanks. We can provide further clarifications in the form of a logical chain.
Assumption: Image $a$, Image $b$ and Barycenter $c$
1) $P^\epsilon$: the transport $a\to b$ and $(P^\epsilon)^\top$: the transport $b\to a$
2) $P_1$: the transport $a\to c$ and $P_2$: the transport $b\to c$
3) the regularization $H_Q(P)$ pushes the solution $P$ toward $Q$
4) If we set $Q=P^\epsilon$, $P_1$ will be closer to $P^\epsilon$, making $c$ closer to $b$.
(The reason: $(P^\epsilon)^\top 1_n = b$, and since $P_1$ is close to $P^\epsilon$, $P_1^\top 1_n = c$ is close to $b$.)
5) If we set $Q=(P^\epsilon)^\top$, $P_2$ will be closer to $(P^\epsilon)^\top$, making $c$ closer to $a$.
6) We set $Q= \lambda P^\epsilon+(1-\lambda)(P^\epsilon)^\top$, so that $c$ balances closeness to $a$ and $b$. Specifically, when $\lambda\to 0$, then $Q=(P^\epsilon)^\top$ and $c$ is closer to $a$ by (5); when $\lambda\to 1$, then $Q=P^\epsilon$ and $c$ is closer to $b$ by (4). | Summary: The authors propose Relative Entropic Optimal Transport (RE-OT), which allows incorporating a prior information matrix into the learning of the optimal transport plan. After studying its theoretical properties, they adapt RE-OT to the long-tailed classification problem and establish the connection between optimal transport and classification. Finally, they illustrate the effectiveness of the proposed method via experiments on long-tailed classification and representation learning.
Strengths: The authors have well established the connection between REOT and various classification loss functions. They also clearly demonstrate how REOT can be used in long-tailed problem. The proposed method also shows strong performance in many long-tailed experiments.
Weaknesses: I would say the contribution to OT theory is quite limited and incremental because all theoretical results of RE-OT are straightforward adaptations from EOT. Prop 1 and its proof need serious revision because there are various typos, which leave me doubting the correctness of the theoretical results and proofs.
- In Eq 20, 21, there is no consistent use of f and f', g and g'.
- In Eq 21, there should be no epsilon. This makes the form of Qtilda incorrect because it depends on epsilon.
- In Eq 22, 23, there is no epsilon.
- In Eq 23, 24, there is no consistent use of f and f', g and g'.
- Typos in Eq 25.
- In Eq 25, I fail to see the last equality and why we can set f and g in such ways to get the equality. I feel like this is circular reasoning.
- In Prop 1, the authors state that when Qtilda = a otimes b, the solutions of REOT and EOT coincide (which is not true in general), while in the proof they consider Qtilda = the all-ones matrix, which makes the two solutions coincide.
- There are also typos in Eq 42, 43.
Despite some concerns on the theoretical results, I feel that they can be fixed without impacting the effectiveness and soundness of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I don't really understand the context of Figure 4: can the authors provide more description of the dataset and experiment?
- Small remark: the authors might also want to add relevant reference to the barycentric projection:
1) Ferradans, S.; Papadakis, N.; Peyré, G.; and Aujol, J.-F. 2014. Regularized Discrete Optimal Transport. SIAM Journal on Imaging Sciences, 7(3): 1853–1882.
2) Courty, N.; Flamary, R.; Tuia, D.; and Rakotomamonjy, A. 2016. Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39: 1853–1865.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors do discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for spending your valuable time on this paper. We hope that our responses can address the concerns you may have.
>***Q1: "In Prop 1, the authors state that when Qtilda = a otimes b, then the solution of REOT and EOT coincides (which is not true in general), while in the proof, they consider Qtilda = one matrix, which makes the two solution coincide."***
A1: Thank you for the comments. We apologize for the misunderstanding. We have corrected the errors in the proof of Prop. 1, and you can see the updated version in the uploaded PDF if you are interested. Here are our responses to your points:
1. $\tilde{Q}$ belongs to $U(a,b)$ due to the optimization problem $\tilde{Q}=\arg\min_P -H_Q(P), s.t. P\in U(a,b)$.
$\quad$ So, one can set $Q=\mathbf{1}_{n\times m}$ or ${Q}=a\otimes b$.
$\quad$ But it is wrong to set $\tilde{Q}=\mathbf{1}_{n\times m}$.
$\quad$ Specifically, when $Q=\mathbf{1}_{n\times m}$, we can simply obtain its corresponding $\tilde{Q}=a\otimes b$.
2. Most papers adopt $Q=\mathbf{1}_{n\times m}$ for the entropic formulation. However, a few papers do use the form ${Q}=a\otimes b$. Please see [1]. These two formulations are exactly equivalent, and the purpose of Prop.1 is to demonstrate that different $Q$ can have equal RE-OT solutions when they have the same $\tilde{Q}$.
$\quad$ We have provided simple numerical experiments **in Fig. 2** in the uploaded PDF to show the **equivalence of $P_{Q}$ and $P_{\tilde{Q}}$** .
3. Sorry for the typos in the proof that may cause misunderstandings. We make corrections to the proof of Prop. 1 **in the uploaded PDF (Fig. 1)**.
[1] Statistical bounds for entropic optimal transport: sample complexity and the central limit theorem, NeurIPS 2019.
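A small numerical check of point 1 (an illustrative sketch with arbitrary marginals, separate from the Fig. 2 experiment in the uploaded PDF): Sinkhorn scaling with kernel $K=Q=\mathbf{1}_{n\times m}$ computes the KL projection of $Q$ onto $U(a,b)$, and the result is exactly $\tilde{Q}=a\otimes b$.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random(4); a /= a.sum()
b = rng.random(5); b /= b.sum()

# Sinkhorn scaling with kernel K = Q performs the KL projection of Q onto
# U(a, b); for Q = 1_{n x m} the projection is the independent coupling.
Q = np.ones((4, 5))
u, v = np.ones(4), np.ones(5)
for _ in range(100):
    u = a / (Q @ v)
    v = b / (Q.T @ u)
P = u[:, None] * Q * v[None, :]
assert np.allclose(P, np.outer(a, b))  # tilde-Q equals a (x) b
```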
>***Q2: "I don't really understand the context of Figure 4: can the authors provide more description of the dataset and experiment?" ***
A2: Thank you. The following is our description of the dataset and experiment of Fig. 4. Specifically, in the experiment of Fig. 4, we first train the models using vanilla softmax, focal loss, and our loss on the CIFAR10-LT dataset. We treat the logits as the sample features (in theory, we could also choose intermediate features of the network instead) and save the logits from the three head and three tailed classes, along with the corresponding predicted couplings, as matrices.
Then, we calculate the barycenters of the labels using Eq. 18, which effectively computes the weighted average of the features. We concatenate the logits and barycenters as a new matrix and use it to calculate t-SNE results for the training data, as shown in the first row of Fig. 4. For the testing data, we concatenate the testing logits with the barycenters calculated from the training data and apply t-SNE dimensionality reduction, as shown in the second row of Fig. 4. By comparing the first and second rows of Fig. 4, we can observe the differences in the positions of the barycenters, which are actually caused by shifts in the feature distributions.
>***Q3: About typos and remark about adding relevant reference to the barycentric projection***
A3: Thanks again. We will correct these mistakes and add the barycentric projection references.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: I thank the authors for their response.
I will leave more comments in the general discussion. | Summary: This paper addresses the problem of unbalanced classification using a variant of optimal transport called Relative Entropic Optimal Transport (RE-OT), which alters the Sinkhorn regularization to incorporate prior information into the coupling solution. Experimental results across different domains validate the efficacy of their approach.
Strengths: The paper is easy to follow. The idea of barycentric projection, used in previous work on domain adaptation with OT, has been adapted to the context of (unbalanced) classification.
Weaknesses: The proposed method depends heavily on the prior distribution Q, which is supposed to be the label distribution of the data. However, specifying the correct Q is impossible in reality. The authors should show the effects of a wrongly specified Q through experimental results. Otherwise, the proposed method has little applicability in real-world applications.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: A recent work [1] has introduced a principled framework for using optimal transport for the class-imbalanced classification problem. Could you please differentiate the current work from this work in terms of theoretical and experimental (if time allows) aspects?
[1] Jin, Lianbao, Dayu Lang, and Na Lei. "An Optimal Transport View of Class-Imbalanced Visual Recognition." International Journal of Computer Vision (2023): 1-19.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The paper already mentioned its limitation: "It assumes known prior label distribution testing data, which is often unknown in real-world scenarios"
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review this paper. Below are our responses.
>***Q1: Difference to [1]***
A1: [1] views neural networks as mappings in optimal transport and proposes that these mappings need to adapt due to the difference between the training and testing label distributions. In terms of technical details, it primarily investigates the dual form of Kantorovich OT and establishes a connection between the gradient of the Kantorovich functional and cross-entropy. Modifying the Kantorovich functional for the inconsistency between the training and testing label distributions can improve testing predictions. In contrast, our paper shows the following differences:
1. We do not start from the dual form but directly focus on the primary OT formulation, leading to the conclusion that Softmax is a special case.
2. The theoretical approximation proposed in [1] relates gradients between cross-entropy and the Kantorovich functional, whereas our equivalence lies between Softmax and Kantorovich coupling, which is the main difference.
3. Our paper is not limited to long-tailed problems alone. The main purpose of Section 4 is to illustrate that modified Softmax-CrossEntropy or other classification losses, from the OT perspective, can be viewed as different OT settings. Consequently, we can conclude that classification can be understood as OT. In future works, we can then incorporate other theoretical advancements of OT into classification tasks. We provide some examples of using the OT perspective to improve classification in the response to Reviewer Cbtk in A3.
>***Q2: About the $Q$ in real world.***
A2: Yes, RE-OT depends on the prior $Q$, which may not exist in the real world. Here are our ideas that may address your concerns:
1. The optimal $Q$ is indeed unknown, and the known label ratios serve as a surrogate choice for (long-tailed) classification. However, we believe that one day, one will be able to discover a more suitable $Q$ to enhance representation learning. In fact, we are currently adopting a new $Q$, which is computed using a vanilla softmax-trained "teacher" model. We present the results in the uploaded PDF in Tab. 3 and it is interesting to find a great improvement for the head classes, which inspires us to further explore the choice of $Q$.
2. The purpose of Sec. 4 is mainly to give the perspective that classification is OT. Many variants of OT can be introduced to the classification problem with this OT perspective (e.g., optimal partial transport with the constraint $\\{\mathbf{P}\mathbf{1}\leq \mathbf{a}, \mathbf{1}^\top \mathbf{P}\mathbf{1}=s\\}$). See more examples in A3 of the rebuttal for Reviewer RCtv.
---
Rebuttal 2:
Comment: Dear Reviewer pKsm:
As the discussion deadline approaches, we would greatly appreciate it if you could provide feedback on our response regarding addressing the concerns. We are also open to any further questions or suggestions that you may have.
Thank you once again for your time and attention. | Summary: This works proposed a new view to the classification problem through the lens of Optimal Transpot. They propose a new variant of OT, says Relative-OT, by changing the KL regularizer constraints in a given distribution. By some special properties of KL divergence, the works show a relationship betwen the entropic relative OT and the entropic OT in Proposition 1 and 2.
In the application of finding barycenter of multiple distributions, the authors proposed to iteratively change the relative distribution in the regularization to obtain a "smooth" (maybe meaningful) barycenter.
Another application is the long-tailed recognition problem. The authors define a cost matrix based on the logit values of the prediction neural network and view the classification problem as a matching problem between data and labels. They aim to find the transport plan satisfying two constraints: one from the true labels, and one from the "transport cost" between data and the probability values of a NN. For the long-tail issue, the relative distribution is set adaptively in the training process.
Section 5 shows the experiment results on long-tailed and unbalanced datasets, such as CIFAR10-LT, CIFAR100-LT, ImageNet-LT, molecule data, etc.
Strengths: I believe that the idea of using OT in the classification problem as presented is new.
Experiment results show an improvement in performance compared to other methods, but not in all cases.
Weaknesses: I myself still do not get the justification for using OT in the classification problem: the use of the relative distribution, the iteration, restating classification as the two problems of finding an optimal plan and a distribution, etc. I see a toy example of blurred images as explanation, but apart from that I do not find others; maybe I missed some parts of the paper. Here I mean the theory and intuition behind it, rather than just assembling OT and the classification problem in the presented way. When more parameters are involved in the training process of a model, it is more likely that we will obtain better performance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I do not understand the claim in equation (10). Could the author elaborate it?
2. In Table 3, case of Many, the performance of proposed method is worse than some other methods. Could the authors provide explanations?
Minor typos: $Q_{ij}$ in line 164; Proposition 1: "and are".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It is fine.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable time in reviewing this paper. Here are our responses.
>***Q1: "I do not understand the claim in equation (10). Could the author elaborate it?""***
A1: Certainly. Here is our explanation. Assume that $P^\epsilon$ is a coupling from image $a$ to image $b$, and we can obtain the normalized pixel values $b_j=\sum_i P^{\epsilon}_{ij}$.
When $b_j$ is small (indicating a white region at position $j$), the corresponding entries $P^{\epsilon}_{:,j} $ will also be small.
Consequently, when computing the barycenter $c$, by setting $Q=P^{\epsilon}$ as the coupling from $a$ to $c$, the solution $P_{:,j}$ will be influenced by $Q$, resulting in a small value for $c_j$. This leads to a non-blurry image. Considering the transportation from $b$ to $a$, we interpolate between $P^{\epsilon}$ and its transpose to obtain the final coupling matrix $Q$.
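For context, the standard entropic barycenter that this $Q$ construction modifies can be computed by iterative Bregman projections (Benamou et al., 2015). The sketch below is a generic baseline implementation under assumed parameter choices, not the paper's RE-OT variant; it exhibits exactly the entropic blur that the special $Q$ in Eq. 10 is designed to counteract.

```python
import numpy as np

def entropic_barycenter(bs, C, eps=0.02, weights=None, iters=500):
    """Entropic Wasserstein barycenter of histograms `bs` via iterative
    Bregman projections (Benamou et al., 2015). Each coupling P_k has row
    marginal bs[k] and shared column marginal c (the barycenter)."""
    K = np.exp(-C / eps)  # Gibbs kernel
    m = len(bs)
    w = weights if weights is not None else np.full(m, 1.0 / m)
    v = [np.ones(len(bs[0])) for _ in range(m)]
    for _ in range(iters):
        u = [bs[k] / (K @ v[k]) for k in range(m)]
        # Geometric mean reconciles the m candidate column marginals.
        c = np.exp(sum(w[k] * np.log(v[k] * (K.T @ u[k])) for k in range(m)))
        v = [c / (K.T @ u[k]) for k in range(m)]
    return c
```

For two mirrored bumps on a 1-D grid, the returned barycenter has its mass centered between them, as expected.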
>***Q2: In Table 3, case of Many, the performance of proposed method is worse than some other methods. Could the authors provide explanations?***
A2: This scenario is also quite common for many peer methods in long-tailed experiments. Essentially, it represents a tradeoff between the head and tailed samples. To achieve higher accuracy on the tailed classes, there is a certain sacrifice in accuracy on the head classes. However, the overall accuracy tends to improve as a result.
>***Q3: "I myself still do not get the justification for using the OT in classification problem."***
A3: We think that viewing classification as OT is highly valuable. Although we mentioned in the future work section that the OT perspective might help approach open-set problems, we can elaborate more on this direction. Here are a few directions enabled by the OT perspective that are hardly possible in the traditional Bayesian way:
**1. Change in the coupling constraint $\\{ P: P1=a \\}$**. We can adopt other constraints instead of $\\{ P: P1=a \\}$, such as the constraints in (modified) Optimal Partial Transport (OPT), $\\{ P: P1 \leq a, 1^\top P1=s \\}$. This allows the model to go beyond choosing exactly one of the candidate classes and enables it to reject classification if it determines that a sample cannot be classified into any of the candidate labels.
**2. Change in label selection in one model**. In the OT perspective, classification is not understood in terms of $P(y|x)$ but rather from a matching perspective. Therefore, when we have representations of samples and labels, we can freely choose different label sets to form different classification tasks. For example, during training, we classify into ${y_1,y_2,\dots,y_{10}}$, but during testing, we classify into ${y_3,y_7,y_{10}}$. Additionally, if the label representation is based on NLP embedding models, we can include previously unseen labels in the classification testing phase, potentially addressing zero-shot learning problems without the need for additional parameter training.
**3. Generalization of Softmax using other regularization.** The current softmax formulation is essentially based on the Entropic Regularization of OT. However, OT regularization goes beyond just the entropic regularization and includes other regularizations such as L2, Tsallis entropies, or divergence-based ones. This opens up possibilities for generalizing softmax.
**4. Classification cross-representation models using Gromov-Wasserstein Distance.** From the OT perspective, classification between the features of samples and labels, based on the Gromov-Wasserstein Distance, can be performed in different feature spaces. For example, the sample features may be in 100 dimensions while the label features are in 20 dimensions. This can be helpful for preserving privacy as Gromov-Wasserstein Distance only requires similarity between samples and labels.
Due to the various variants and theories within OT, considering classification as OT naturally allows us to incorporate existing OT knowledge into the field of classification, which is the ongoing work we are conducting. Only through this OT perspective does it become possible in the future to develop models that can "Classify Anything," including different task selections, variable classification constraints, and more, in one model. Therefore, the main purpose of this paper is to provide a different perspective on classification models and offer conceptual assistance to general recognition models.
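As one concrete instance of point 3 above, replacing the entropic regularizer with a squared-L2 penalty yields the sparsemax mapping (Martins & Astudillo, 2016), which returns sparse probability vectors instead of the strictly positive ones produced by softmax. A hedged sketch of that mapping (an illustration of the generalization direction, not something implemented in the paper):

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex
    (Martins & Astudillo, 2016): the row mapping induced by
    L2- rather than entropy-regularized OT."""
    zs = np.sort(z)[::-1]              # sort logits in descending order
    cs = np.cumsum(zs)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * zs > cs          # entries that stay in the support
    k_max = k[support][-1]
    tau = (cs[support][-1] - 1) / k_max
    return np.maximum(z - tau, 0.0)    # sparse probability vector
```

Unlike softmax, dominated classes receive exactly zero probability, e.g. `sparsemax([3.0, 1.0, 0.2])` puts all mass on the first class.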
>***Q4: About Typos.***
A4: Thank you for your feedback. We will make corrections in the final version.
---
Rebuttal Comment 1.1:
Title: reply to the authors
Comment: I would like to thank the authors for their answers. Because I am not satisfied with the current presentation form of the paper, I will keep my score unchanged. | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers for investing their valuable time and providing insightful comments on our paper. Overall, the reviewers found our work to be novel (Cbtk, RCtv), with promising experimental results (Cbtk, RCtv, HboX), and easy to follow (pKsm). Additionally, they acknowledged the theoretical properties (Cbtk), the meaningful barycenter calculation (RCtv), and the connections between REOT and various classification losses (46Rh, HboX). We are grateful for the recognition and encouragement from the reviewers, and their comments have inspired us to further improve our work.
Simultaneously, the reviewers have provided several critical insights and improvement suggestions. In response to these, we have carefully considered their feedback and provided our response. For instance, one common concern raised by multiple reviewers is the practical significance of viewing classification as OT. We have addressed this concern by offering explanations and providing several examples to illustrate the interesting and meaningful aspects of this perspective. We firmly believe that our work will bring further inspiration to the readers.
Pdf: /pdf/e0aa435bcdec0fc6e01f3e078ad848169389cf1d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces the relative entropic (RE) regularization for optimal transport (OT) problems. Instead of the traditional entropy regularizer (Cuturi'13), the RE-OT is defined through a given prior matrix $\boldsymbol{Q}$, which guides the matching between source and target distributions. Using the approach of Inverse RE-OT, the authors derive a similar form to softmax-based cross-entropy loss to train long-tailed data in computer vision pipelines.
Strengths: - The paper introduces RE-OT, a new variant of entropic regularization for OT.
- The RE-OT enjoys some theoretical properties including a static Schrodinger form and a dual formulation as for the standard entropic regularizer.
- The paper investigates the Inverse RE-OT for long-tailed recognition tasks and shows that these tasks can be formulated as matching perspectives with OT.
- Extensive numerical experiments on image classification and molecule classification.
Weaknesses: - The RE-OT is strongly dependent on the prior $\boldsymbol{Q}$, that can be an additive hyperparameter.
- The Inverse RE-OT approach is limited since one has to have a good knowledge of the supervision $\tilde{\boldsymbol{P}}.$
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: ### Questions:
- Does the prior guide $\boldsymbol{Q}$ belong, in general, to the polytope $U(a,b)$?
- In the experiments, how the parameter $\epsilon$ was chosen?
- During training, is the constant $c$ fixed? How can one guarantee the positiveness of $\boldsymbol{C}_{ij}$?
- In L554, the last approximation is obtained by a Taylor expansion?
### Typos
Below I list some typos to be corrected:
- L38: involves learning the Inverse OT: add a reference
- L105: $\tilde{\boldsymbol{P}}$ is not defined
- L163: bold $Q$
- L164: $\tilde{\boldsymbol{Q}}_{ij}$
- L165: ${\boldsymbol{P}}^\epsilon_{\tilde{\boldsymbol{Q}}}$
- L173, L175: bold $K$
- L180: in Eq (10), there is no transpose
- L184: as $\lambda$ changes.
- L192 - L202: avoid to use $n$ as the iterated index ($^{(n)}$)
- L210: constant $c$ is not commented
- L211: in Eq (14), the cost matrix $\boldsymbol{C}$ depends in $\theta$
- L222: is an epoch
- L240: in Eq (16) there is a dependency on $\theta$. Also, a minus sign and a factor $n$ are missing, i.e.
\begin{equation}
\min_{\theta} L = - \sum_{i,j} \tilde{\boldsymbol{P}}_{ij} \log \frac{\cdots} {n \cdots}
\end{equation}
- L258: deep graph matching: add a reference
- L282, L283: croponding --> corresponding
- L503: Eqs (22) and (23): missing dependency on $\epsilon$
- L507: multiplier --> multipliers
- L507: $\tilde{\boldsymbol{Q}}_{ij}$
- L508: in Eq (25), $\boldsymbol{P}^\epsilon_Q = \cdots {\boldsymbol{Q}} diag(e^{g'/\epsilon})$
- L510: \boldsymbol{Q}
- L522: $-\boldsymbol{C}_{ij}$
- L542 and L543: depedency on $\theta$
- L547: Eq (44)
$$
\min_{\theta} L = - \sum_{i,j} \tilde{\boldsymbol{P}}_{ij} \log \frac{\cdots} {n \cdots}
$$
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable time and comments and we hope the following answers can address your concerns.
>***Q1: Does the prior guide Q belong, in general, to the polytope U(a,b)***
A1: No. For example, $Q$ can be set as $1_{n×m}\notin U(a,b)$, and then the RE-OT problem degenerates into Entropic OT. However, the resulting $\tilde{Q}$ computed based on $Q$ belongs to the polytope $U(a,b)$. As demonstrated in Prop.1 (2), the use of $Q$ and $\tilde{Q}$ is essentially equivalent.
>***Q2: In the experiments, how the parameter $\epsilon$ was chosen?***
A2: We simply set $\epsilon=1$. Here, $\epsilon$ is exactly equivalent to the temperature $\tau$ in the Softmax function. In our paper, we did not modify the temperature and followed the setting of vanilla Softmax, where we simply set $\epsilon=1$.
>***Q3: During training, the constant c is it fixed? How can guarantee the positiveness of $C_{ij}$***
A3: $c$ is fixed but is independent of training. Here, $c$ refers to a sufficiently large number that exists only in theory. We can easily observe that $C_{ij}=c-l_{ij}>0$ when $c$ is sufficiently large, ensuring the positivity of $C_{ij}$. However, in practical computations, when performing row normalization, we can obtain $e^{-(c-l_{ij})}/\sum_k e^{-(c-l_{ik})}=e^{l_{ij}}/\sum_k e^{l_{ik}}$. The results are independent of $c$.
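This cancellation is easy to verify numerically; the snippet below (illustrative, not from the paper) checks that row-normalizing $e^{-(c-l_{ij})}$ reproduces the softmax of the logits for different sufficiently moderate values of $c$ (large enough for positivity, small enough to avoid floating-point underflow).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# With C_ij = c - l_ij, row-normalizing exp(-C_ij) cancels the constant c,
# recovering the ordinary softmax of the logits l.
l = np.array([2.0, -1.0, 0.5])
for c in (10.0, 50.0):
    p = np.exp(-(c - l))
    p /= p.sum()
    assert np.allclose(p, softmax(l))
```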
>***Q4: In L554, the last approximation is obtained by a Taylor expansion?***
A4: Yes, the last approximation is obtained through Taylor expansion as $\epsilon$ approaches 0 from the positive side. This derivation is inspired by Eq. 6 in [1]. We will clarify it in the new version.
>***Q5: The Inverse RE-OT approach is limited since one has to have a good knowledge of the supervision $\tilde{P}$***
A5: Yes, one has to know $\tilde{P}$ for Inverse RE-OT; otherwise, we cannot optimize Inverse RE-OT for representation learning. However, for a classification task, $\tilde{P}$ is assumed to be known as the labels of samples.
>***Q6:The RE-OT is strongly dependent on the prior Q, that can be an additive hyperparameter.***
A6: Yes, RE-OT is dependent on the prior $Q$ which can be an additive hyperparameter. However, we try to address your concern about the application of OT and (long-tailed) classification fields:
(1) For OT, $Q$ can be **computed** based on practical needs, instead of being an artificially added hyperparameter. For instance, as shown in Eq. 10, by setting $Q$ in a specific way, we can deblur the barycenter of images, which is not achievable with traditional Entropic OT.
(2) For (long-tailed) classification, the optimal $Q$ is indeed unknown, and the known label ratios serve as a surrogate choice. However, we believe that one day, we will be able to discover a more suitable $Q$ to enhance representation learning. In fact, we are currently adopting a new $Q$, which is computed using a vanilla softmax-trained "teacher" model. We present the results in the uploaded PDF in Tab. 3 and it is interesting to find a great improvement for the head classes, which inspires us to further explore the choice of $Q$.
>***Q7: About Typos.***
A7: Thank you for your feedback. We will make corrections in the final version.
[1] F. Wang and H. Liu. Understanding the behavior of contrastive loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2495–2504, 2021
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I thank the authors for their efforts in the rebuttal. | null | null | null | null | null | null |
FOCAL: Contrastive Learning for Multimodal Time-Series Sensing Signals in Factorized Orthogonal Latent Space | Accept (poster) | Summary: This paper proposes a self-supervised contrastive learning framework for extracting comprehensive features from multimodal time-series sensing signals. The framework refines contrastive learning by modeling shared and private features and exploiting orthogonality constraints. Moreover, a Temporal Structural Constraint is proposed to compensate for contrastive learning's purely local handling of temporal information. Finally, to demonstrate the effectiveness of the proposed FOCAL, the authors conduct extensive experiments on multiple multimodal sensing datasets.
Strengths: 1. Extensive experiments are conducted on four benchmarks: MOD, ACIDS, RealWorld-HAR and PAMAP.
2. This work is relatively novel and technically sound.
3. The description and explanation of motivation are clear.
4. It seems to be a pioneer work to consider and solve temporal information locality.
5. The results are promising.
Weaknesses: 1. In the experiments, what is the intuition of the hyperparameter setting of Eq.5? Sensitivity analysis appears to be lacking.
2. Is the proposed temporal structural constraint also applicable to other contrastive learning frameworks to solve the problem of temporal information locality?
3. From the ablation results, the private space seems to be important for contrastive learning, somewhat similar to an intra-modal contrastive loss. This work is based on strong orthogonality constraints, but in fact the performance of the FOCAL-noOrth setting does not drop too much. I worry whether the orthogonality assumption is too strong.
4. Eq. 4 is essentially a triplet contrastive loss; are there possible alternatives? Moreover, the value of the "margin" does not seem to be mentioned in the paper.
5. I think Figure 6 should be compared with the visualization results of the best baseline (or some variants of FOCAL) so as to reflect the differences or superiority, e.g., Supervised vs. MoCo vs. FOCAL. Otherwise, it feels like visualization for visualization's sake.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors provide a comprehensive discussion about limitations and potential extensions in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # **Response to Reviewer 8mBK**
**Q1**: In the experiments, what is the intuition behind the hyperparameter setting of Eq.5? Sensitivity analysis appears to be lacking.
**Response**: We have added the sensitivity test and plotted the performance of FOCAL against different hyperparameters in Figure 12 in the attached pdf. We observe that FOCAL is generally robust to hyperparameter selection, with less than 2% accuracy fluctuation in all cases. For this reason, we did not perform a comprehensive hyperparameter search in our experiments.
**Q2**: Is the proposed temporal structural constraint also applicable to other contrastive learning frameworks to solve the problem of temporal information locality?
**Response**: As in our response to Reviewer zwVA, we applied the proposed temporal constraint to multiple contrastive learning baselines (i.e., SimCLR, MoCo, CMC, Cocoa, and GMC). Table 15 and Table 16 summarize the results on ACIDS and PAMAP2, and we observed up to 18.99% accuracy improvement on ACIDS and up to 8.39% on PAMAP2. This validates that the temporal constraint can be used as a plugin to enhance existing contrastive learning frameworks for time-series data.
**Table 15: Benefits of Temporal Constraints to SOTA baselines on ACIDS**
| | SimCLR Acc | SimCLR F1 | MoCo Acc | MoCo F1 | CMC Acc | CMC F1 | Cocoa Acc | Cocoa F1 | GMC Acc | GMC F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| wTemp | **0.7461** | **0.6938** | **0.7836** | **0.6618** | **0.8690** | 0.7090 | **0.8543** | **0.7665** | **0.9347** | **0.8109** |
| Vanilla | 0.7438 | 0.6101 | 0.7717 | 0.6205 | 0.8443 | **0.7244** | 0.6644 | 0.5359 | 0.9096 | 0.7929 |
**Table 16: Benefits of Temporal Constraints to SOTA baselines on PAMAP2**
| | SimCLR Acc | SimCLR F1 | MoCo Acc | MoCo F1 | CMC Acc | CMC F1 | Cocoa Acc | Cocoa F1 | GMC Acc | GMC F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| wTemp | **0.7129** | **0.6884** | **0.7800** | **0.7602** | 0.7804 | 0.7583 | **0.8442** | **0.8146** | **0.8253** | **0.8114** |
| Vanilla | 0.6802 | 0.6583 | 0.7559 | 0.7387 | **0.7906** | **0.7706** | 0.7603 | 0.7187 | 0.8119 | 0.7860 |
**Q3**: This work is based on strong orthogonality constraints, but in fact the performance of the FOCAL-noOrth setting does not drop much. I am concerned that the orthogonality assumption may be too strong.
**Response**: Our interpretation is that the introduction of the private space brings the most performance improvement (FOCAL vs. FOCAL-noPrivate), while the orthogonality constraint further enhances performance by pushing the two spaces to exploit unrelated semantics. As can be seen from our ablation study, FOCAL-noOrth improves significantly over FOCAL-noPrivate, while FOCAL further improves over FOCAL-noOrth, which means the orthogonality constraint contributes positively to FOCAL. Conversely, when we replace the geometric orthogonality with distributional independence, FOCAL-wDistInd degrades significantly compared to FOCAL-noOrth, which means distributional independence contributes negatively to the framework.
**Q4**: Eq.4 is essentially a triplet contrastive loss; are there possible alternatives? Moreover, the value of the “margin” does not seem to be mentioned in the paper.
**Response**: We conducted a sensitivity test on the margin value in the temporal triplet loss and plotted the performance of FOCAL against different margin values in Figure 12(d) in the attached pdf. FOCAL's accuracy changes from 95.11% to 94.89% on ACIDS and from 82.22% to 84.22% on PAMAP2 when the margin varies from 0.1 to 3. Setting the margin to 1 is overall the best strategy. We also conclude that FOCAL is not very sensitive to the temporal constraint margin selection.
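For concreteness, a minimal numpy sketch of a generic triplet loss with a margin, of the kind Eq.4 is described as above (the function name and the Euclidean-distance choice are illustrative assumptions, not FOCAL's actual implementation):

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Generic triplet loss: push the anchor-positive distance to be at
    least `margin` smaller than the anchor-negative distance (hinge at 0)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

# A sufficiently distant negative already satisfies the margin,
# so the loss is zero.
a = np.array([1.0, 0.0])
loss = triplet_margin_loss(a, a, np.array([-1.0, 0.0]), margin=1.0)  # 0.0
```

Under this reading, the margin is the only free parameter, which is why the sensitivity test above sweeps it over [0.1, 3].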
**Q5**: Figure 6 should be compared with the visualization results of the best baseline (or some variants of FOCAL), so as to reflect the differences or superiority, e.g., Supervised vs. MoCo vs. FOCAL?
**Response**: We provided a t-SNE visualization comparison between FOCAL and several multi-modal contrastive learning frameworks (i.e., CMC, GMC, and Cocoa). Figure 10 and Figure 11 in the attached pdf visualize their concatenated modality features with the DeepSense and SW-T encoders, respectively, on the MOD dataset as an example. The results show that FOCAL achieves better separation among different classes, which aligns with our quantitative clustering evaluation results.
---
Rebuttal 2:
Comment: Dear Reviewer 8mBK,
Thanks again for your valuable feedback.
We have carefully considered all your comments and have tried our best to improve the paper. We have added a sensitivity test for each hyper-parameter showing the robustness of FOCAL against these hyper-parameters, t-SNE figures demonstrating a better separation among different classes for FOCAL, and additional experiments with improvements up to 18.99% after applying temporal constraints to multiple contrastive learning baselines.
As we are reaching the end of the author-reviewer discussion period, we wonder if our responses have sufficiently addressed your concerns, and we are happy to address any questions you may have.
Thanks again for your effort in helping us improve the paper!
Best,
Submission #3273 Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for your kind response. The authors have addressed some of my concerns. I'd like to raise my rating. | Summary: This paper proposes a multimodal contrastive learning framework (FOCAL) for extracting comprehensive features from multimodal time-series sensing signals through self-supervised training. FOCAL first decouples each modality into two subspaces, i.e., shared and private spaces, using a simple soft orthogonal loss as an objective. A temporal structural constraint on the modality features is then proposed to enhance the features temporally. The authors use four multimodal sensing datasets with two backbone encoders and two classifiers to conduct many experiments.
Strengths: 1. The motivation and contribution of the paper are clearly described.
2. Many experiments are presented in the manuscript.
Weaknesses: 1. The biggest concern is the effectiveness of the multimodal feature factorization. From line 161, this work uses a simple soft orthogonal constraint to decouple or factorize each modality feature. However, a single orthogonal constraint is too simple and may not decouple multimodal features well. In particular, information is easily leaked between modalities, and many prior works have tried to solve this problem, such as [1-3]. In addition, there is no visualization result for either subspace in the experimental part, so it is difficult to determine whether the factorization is effective.
[1] Li Y, Wang Y, Cui Z. Decoupled Multimodal Distilling for Emotion Recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 6631-6640.
[2] Ouyang J, Adeli E, Pohl K M, et al. Representation disentanglement for multi-modal brain MRI analysis[C]//Information Processing in Medical Imaging: 27th International Conference, IPMI 2021, Virtual Event, June 28–June 30, 2021, Proceedings 27. Springer International Publishing, 2021: 321-333.
[3] Yang D, Huang S, Kuang H, et al. Disentangled representation learning for multimodal emotion recognition[C]//Proceedings of the 30th ACM International Conference on Multimedia. 2022: 1642-1651.
2. In line 29, the word “ignore” may not be appropriate. For example, the CLIP contrastive framework does not emphasize shared or private spaces, but we cannot assume that such work ignores heterogeneity, only that it does not explicitly consider it. Therefore, it might be better to replace "ignore" with "do not explicitly consider".
3. In the experiments, why not consider some more common multimodal scenarios, such as the MOSI [4] and MOSEI [5] datasets, which contain the three most common modalities in the real world, i.e., language, vision, and acoustics, and whose features are packaged as time series, in line with the setting of this paper?
[4] Zadeh A, Zellers R, Pincus E, et al. Mosi: multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos[J]. arXiv preprint arXiv:1606.06259, 2016.
[5] Zadeh A A B, Liang P P, Poria S, et al. Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2018: 2236-2246.
4. The public datasets lack the necessary references, please check carefully. (Line 199-206)
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Will the MOD dataset be open-released?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: No limitations are provided in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # **Response to Reviewer Q2hY**
**Q1**: The biggest concern is the effectiveness of multimodal feature factorization. From line 161, this work uses the simple soft orthogonal constraint to decouple or factorize each modality feature. However, the single orthogonal constraint is too simple and may not decouple multimodal features well. In particular, information is easily leaked between modalities, and many prior works have tried to solve this problem such as [1-3]. In addition, there is no visualization result for both subspaces in the experimental part, so it is difficult to determine whether the factorization is effective.
**Response** :
We agree with the reviewer that FOCAL shares with references [1-3] the idea of exploiting both shared and private information in multi-modal collaboration. However, several factors make FOCAL substantially different from the listed references.
First, regarding the learning paradigm, [1, 2, 3] all work with supervised tasks (classification or reconstruction) where the disentanglement objectives are used as augmentations to original task objectives. However, FOCAL works with self-supervised contrastive learning, where the factorization space should be designed without knowledge of downstream tasks and the auxiliary objectives should cooperate with the contrastive learning objectives, which are challenges that have not been considered in listed references.
Second, regarding the application scenario, we have positioned the paper as a self-supervised learning framework for multi-modal time-series sensing signals, where contributions are made from both the multi-modal perspective and the time-series perspective. While we think part of the design could potentially be applicable to more diverse application scenarios, the evaluation of language-vision-acoustics tasks is beyond the scope of this paper, and we leave it for future exploration.
We select the simple orthogonal constraint according to the intuition that it aligns well with the contrastive learning paradigm and the fact that it benefits the downstream task performance.
On the one hand, contrastive learning works in a manner such that the angular similarity between embeddings represents semantic proximity; the geometric orthogonal constraint thus aligns with semantic independence in contrastive learning and penalizes overlap between the private space and the shared space.
On the other hand, adding the orthogonal constraint benefits the downstream tasks after the contrastive pretraining phase. As we presented in the ablation study, although the alternative distributional independence might create better-disentangled representations, it leads to significant degradation after being integrated with the contrastive objectives.
As for the information leakage concern (i.e., exactly the same information being exploited in both spaces), we believe the proposed orthogonal constraint is a valid solution that prevents leakage while being suitable for contrastive learning, as demonstrated by the end results in our ablation study. The performance improvement of FOCAL-noOrth over FOCAL-noPrivate means the private space task encourages the encoder to exploit more comprehensive semantics, while the improvement of FOCAL over FOCAL-noOrth further validates that the orthogonal constraint encourages the private space to exploit information non-overlapping with the shared space and enriches the learned semantics. We do not expect the private modality representations to be fully separable in the visualization, because the instance discrimination task selected for the private space is not only related to private modality information but can also be related to the shared information across modalities. However, too much reliance on the shared information in the private space is penalized by the imposed orthogonal constraints. How to design a proxy task that is related only to the private modality information is a challenging problem that we want to investigate in the future. The margin loss solution in [1] cannot be applied because we do not have downstream category information during the self-supervised pretraining.
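As an illustration of the kind of soft orthogonality constraint discussed here, the following numpy sketch penalizes the squared cosine similarity between shared- and private-space embeddings. This is an assumed form for illustration only; the paper's exact objective may differ:

```python
import numpy as np

def soft_orthogonality_penalty(shared, private, eps=1e-8):
    """Mean squared cosine similarity between shared and private
    embeddings across a batch: zero when the two spaces are
    geometrically orthogonal, one when they fully overlap."""
    s = shared / (np.linalg.norm(shared, axis=1, keepdims=True) + eps)
    p = private / (np.linalg.norm(private, axis=1, keepdims=True) + eps)
    cos = np.sum(s * p, axis=1)   # per-sample cosine similarity
    return float(np.mean(cos ** 2))
```

Squaring makes the penalty sign-invariant, so anti-parallel embeddings are penalized as strongly as parallel ones, which matches the angular-similarity reading of contrastive learning described above.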
**Q2**: In line 29, the word “ignore” may not be as appropriate.
**Response**: We agree with the reviewer and will change “ignore” to “explicitly consider” in our next draft.
**Q3**: In the experiments, why not consider some more common multimodal scenarios, such as the MOSI [4] and MOSEI [5] datasets, which contain the three most common modalities in the real world, i.e., Language, Visual, and Acoustics, and whose features are packaged as time-series-based, in line with the settings in this paper.
**Response**: Thanks for the suggestion. Due to the short duration of the rebuttal period, we were not able to extend our framework to more diverse modalities (i.e., language, vision, and acoustics). We would leave this effort as one of our potential future extensions. Instead, we position FOCAL as a novel contrastive learning framework for multi-modal time-series sensing signals in IoT applications and compare it with an extensive set of SOTA baselines in the same setting.
**Q4**: The public datasets lack the necessary references, please check carefully. (Line 199-206)
**Response**: Thanks for pointing out the issue. Although the references are added in the Appendix, we will also add the missing references in the main body of the paper.
**Q5**: Will the MOD dataset be open-released?
**Response**: Yes, we do plan to release the MOD dataset upon the paper's acceptance.
**Q6**: No limitations are provided in this paper.
**Response**: We have put the discussion on limitations and future extensions in Appendix G due to the space limit. We will also integrate the comments from the reviewers into more limitation discussions in the next version.
---
Rebuttal Comment 1.1:
Comment: I have read the response carefully, and my questions have basically been addressed. I would like to increase my score.
---
Rebuttal 2:
Comment: Dear Reviewer Q2hY,
Thanks again for your valuable feedback.
We have explained that while FOCAL shares similarities with references [1-3], several factors make FOCAL substantially different from the listed references. We then clarified the motivation and intuition of choosing the proposed orthogonal constraint over a few alternatives. We further supported our claims through ablation studies that demonstrate the effectiveness of the orthogonal constraints and their compatibility with contrastive learning. We believe that the proposed orthogonal constraint is a valid solution to address information leakage in multimodal contrastive learning. From the ablation studies, we observed an improvement for FOCAL compared to FOCAL-noOrth, validating that the orthogonal constraint encourages the private space to exploit information non-overlapping with the shared space and enriches the learned semantics.
We have also fixed the wording and promise to release the MOD dataset upon the paper’s acceptance.
Lastly, we would like to kindly note that we position FOCAL as a novel self-supervised learning framework in the multimodal time-series sensing domain. Although we believe some aspects could be potentially applicable to more diverse applications, the evaluation of the language-vision-acoustics task is beyond the scope of this paper, and we leave it as our future explorations.
As we are reaching the end of the author-reviewer discussion period, we wonder if our responses have sufficiently addressed your concerns, and we are happy to address any questions you may have.
Thanks again for your effort in helping us improve the paper!
Best,
Submission #3273 Authors | Summary: The paper proposes FOCAL, a contrastive learning method for multimodal time-series signals. The main idea is to first encode each modality into a factorized orthogonal space, and design four pretraining objectives that enforce modality consistency, transformation consistency, orthogonality constraint and temporal locality constraint. Experimental results demonstrate the efficacy of FOCAL.
Strengths: + The two motivations for designing the contrastive objectives make sense to me --- modality-private features can contribute in contrastive learning, and the temporal structural constraint also seems valid.
+ The authors conduct an extensive evaluation of FOCAL --- it is compared with many contrastive learning baselines and evaluated on four multimodal datasets. The improvement achieved by FOCAL is large.
+ The ablation study is great and provides a good analysis on the four pretraining objectives.
+ The paper is clearly presented and easy to follow.
Weaknesses: While I generally agree with the motivations and the proposed four objectives, Eq (5) is a combination of four objectives, and it seems hard to balance the four terms. I wonder if some of the training objectives may compete during training, and how do the authors set the three hyper-parameters $\lambda_p$, $\lambda_o$ and $\lambda_t$? Does it require some manual hyper-parameter tuning? Jointly optimizing the four objectives does not look like an easy task.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes the limitations have been well discussed in Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # **Response to Reviewer oGNZ**
**Q1**: I wonder if some of the training objectives may compete during training, and how do the authors set the three hyper-parameters $\lambda_p$, $\lambda_o$, and $\lambda_t$? Does it require some manual hyperparameter tuning? Jointly optimizing the four objectives does not look like an easy task.
**Response**: Our conclusion is that competition between learning objectives does not happen in FOCAL, for the following reasons. First, without the private space task, the performance of FOCAL-noPrivate is much lower than FOCAL-noOrth (note that noPrivate also implicitly means no orthogonality constraint), so the private contrastive loss contributes positively to FOCAL. Second, for the orthogonality constraint and temporal constraint, FOCAL works better than the FOCAL-noOrth and FOCAL-noTemp variants respectively, so both constraints contribute positively. In contrast, we observed that FOCAL-wDistInd works worse than FOCAL-noOrth, while FOCAL-wTempCon works worse than FOCAL-noTemp, which means competition happens between these alternative constraints and the original contrastive objectives. As for the hyper-parameters, they are mostly shared across different datasets and backbone encoders. We set $\lambda_p$ to 1 and $\lambda_o$ to 3 by default and only tune $\lambda_t$ manually when needed. As mentioned in our general response, we have added the sensitivity test results in the attached pdf to demonstrate the resilience of FOCAL to these hyperparameter values.
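Schematically, a combined objective of the kind Eq. 5 is described as (a weighted sum of four losses) can be read as a plain weighted sum. The sketch below is illustrative only; it assumes the shared-space loss carries implicit weight 1, with the default weights quoted in this response ($\lambda_p=1$, $\lambda_o=3$):

```python
def focal_total_loss(l_shared, l_private, l_orth, l_temp,
                     lam_p=1.0, lam_o=3.0, lam_t=1.0):
    """Illustrative weighted sum of the four pretraining objectives:
    shared-space contrastive, private-space contrastive, orthogonality,
    and temporal locality losses."""
    return l_shared + lam_p * l_private + lam_o * l_orth + lam_t * l_temp
```

With all component losses equal to 1 and the default weights, the total is 1 + 1 + 3 + 1 = 6, which makes the relative emphasis on the orthogonality term visible at a glance.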
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal!
Comment: The provided responses have addressed my concerns. | Summary: The paper introduces FOCAL, a new contrastive learning framework for extracting comprehensive features from multimodal time-series sensing signals through self-supervised training. Unlike previous frameworks that focused solely on shared information between sensory modalities, FOCAL also factors in exclusive modality information crucial for understanding physical semantics. It also properly handles temporal information locality, which previous time series contrastive frameworks have not done. FOCAL creates a factorized latent space of shared and private (modality-exclusive) features from each modality and applies a temporal structural constraint on these features. Through extensive testing on four multimodal sensing datasets, FOCAL consistently outperforms the state-of-the-art baselines in downstream tasks, demonstrating its superior performance.
Strengths: - This paper is excellently written, with clear logic and motivation that is easy to understand.
- The results presented are impressive and Figure 1 does a fantastic job in aiding the understanding of the paper.
- The idea of modeling modality-shared and specific information is very great.
Great work!
Weaknesses: I couldn't find any significant weaknesses in this paper. However, I should note that I'm not an expert in this topic. While I am familiar with multimodality and contrastive learning, my understanding of time-series analysis is limited.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Is there any hyperparameter search performed for λp, λo, and λt, which control the weights of each loss component? Additionally, was there any hyperparameter search carried out for other methods?
- Why does FOCAL-wDistInd perform worse than the noOrth model? Does the replacement of orthogonality with distributional independence contribute to this difference?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # **Response to Reviewer WTA9**
**Q1**: Is there any hyperparameter search performed for $\lambda_p$, $\lambda_o$, and $\lambda_t$, which control the weights of each loss component? Additionally, was there any hyperparameter search carried out for other methods?
**Response**: We did not perform a comprehensive hyperparameter search for the loss weights in FOCAL, which we agree might slightly improve our performance. Instead, we manually tuned the hyperparameters on one dataset and used the same hyperparameters across the remaining datasets, with minor changes (i.e., on $\lambda_t$). The same tuning strategy was also carried out for all baseline methods. As mentioned in our general response, we have added sensitivity test results in Figure 12 in the attached pdf to demonstrate the resilience of FOCAL to these hyperparameter values.
**Q2**: Why does FOCAL-wDistInd perform worse than the noOrth model? Does the replacement of orthogonality with distributional independence contribute to this difference?
**Response**: As we observed in the experiments, the distributional independence constraint leads to significant performance degradation in most cases, even worse than FOCAL-noOrth, especially when SwinTransformer is used as the backbone encoder. Our interpretation is two-fold: First, the dynamic nature of the attention mechanism within SwinTransformer makes the distribution of encoded latent features more complicated and hard to discriminate with a simple classifier. Second, the iterative training between the distribution discriminator and the contrastive objectives does not work as coherently as with autoencoder training, and it easily collapsed in our experiments. We leave the effective integration of distributional disentanglement and the contrastive learning framework as a future exploration direction.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: I don't have other questions. | Rebuttal 1:
Rebuttal: # General Responses
We would like to sincerely thank all the reviewers for their valuable feedback and constructive suggestions for this submission. As a summary of our responses, we have finished the following tasks during the rebuttal period:
1. We added a sensitivity test of loss hyperparameters on ACIDS and PAMAP2 datasets, including three loss weight terms ($\lambda_p$, $\lambda_o$, $\lambda_t$) and the temporal constraint margin. We present the accuracy of FOCAL with different hyperparameters in Figure 12, and the results show that FOCAL is generally resilient against these hyperparameters. Therefore, the hyperparameter values are mostly shared across different datasets and backbone encoders, with manual tuning only on temporal constraint weight $\lambda_t$. In addition, we set the private loss weight $\lambda_p$ as $1$, the orthogonal loss weight $\lambda_o$ as $3$, and the temporal loss margin as $1$ by default.
2. As suggested by Reviewer zwVA and Reviewer 8mBK, we applied the proposed temporal constraint to multiple baselines (i.e., SimCLR, MoCo, CMC, Cocoa, and GMC), and achieved noticeable performance improvement compared to their vanilla versions. The full results on ACIDS and PAMAP2 datasets are summarized in Table 15 and Table 16. The inspiring results demonstrated the effectiveness of the proposed temporal constraint in general contrastive learning frameworks for time-series data.
3. We provided the visualization comparison between FOCAL and several multi-modal contrastive baselines in Figure 10 and Figure 11, which showed that FOCAL achieved better separation among different downstream classes after the pretraining.
4. Due to the space limit, we have put the discussion of technical limitations and potential extensions in Appendix G, and we will accordingly expand this list by integrating the comments of all reviewers.
5. We promise to fix all presentation and terminology issues pointed out by Reviewer Q2hY.
**We have included these tables (Tables 15-16) and figures (Figures 10-12) in the attached pdf to address all key concerns and to clarify our work.**
Pdf: /pdf/76f6a9b2b8b98e209e72e484730b945800435071.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The proposed FOCAL is a contrastive learning method for time-series signals. The major contributions are:
1. For multi-modality samples, FOCAL learns shared features that are similar between modalities, and private features that are similar intra-modality but different across modalities.
2. The shared and private features are orthogonal, so that they focus on different aspects.
3. The Temporal Locality Constraint, which restricts the average distance within a short sequence to be less than the average distance between two random sequences. The "average" operation allows some samples to be similar even though they come from temporally distant samples.
Extensive experiments are conducted with 4 datasets and 12 baselines, which gives consistent SOTA performance on classification and clustering tasks.
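The temporal locality idea summarized in point 3 of this review can be sketched as a hinge on *average* pairwise distances. This is an illustrative numpy sketch, not the authors' code, and the function name and margin default are assumptions:

```python
import numpy as np

def temporal_locality_loss(seq_embeds, rand_embeds, margin=1.0):
    """Hinge loss on average pairwise distance: embeddings from a short
    sequence should be closer on average than embeddings drawn from a
    random (temporally distant) sequence."""
    def avg_pairwise_dist(x, y):
        # Mean Euclidean distance over all cross pairs (N, M);
        # for x == y this includes self-pairs, fine for a sketch.
        diffs = x[:, None, :] - y[None, :, :]
        return np.linalg.norm(diffs, axis=-1).mean()

    d_within = avg_pairwise_dist(seq_embeds, seq_embeds)
    d_across = avg_pairwise_dist(seq_embeds, rand_embeds)
    return max(d_within - d_across + margin, 0.0)
```

Because only the averages are constrained, individual pairs from distant sequences may still be close, which captures the flexibility the reviewer highlights.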
Strengths: 1. The idea of private feature is novel, and effective according to the ablation study.
2. Although the temporal structural constraint is a simple modification, it provides noticable performance increase.
3. The proposed method is simple and easy to follow. Detailed network design, data processing, training configurations are provided in Appendix.
Weaknesses: 1. The proposed contribution is mostly the loss functions, e.g., private and shared feature constrative loss, temporal locality loss, etc. I believe these loss can be applied to existing methods with simple modification. It would be very strong to show that the proposed loss serves as a plugin that enhance other contrastive learning methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper discusses little on the technical limitations and future directions. It would be great to discuss failure cases in more detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # **Response to Reviewer zwVA**
**Q1**: It would be very strong to show that the proposed loss serves as a plugin that enhances other contrastive learning methods.
**Response**: Thanks for the suggestion. We have applied the proposed temporal constraint to multiple contrastive learning baselines (i.e., SimCLR, MoCo, CMC, Cocoa, and GMC). Tables 15 and 16 summarize the results on ACIDS and PAMAP2, and we have observed noticeable performance improvements in most cases (up to 18.99% on ACIDS and up to 8.39% on PAMAP2). This validates that the temporal constraint can be used as a plugin to enhance existing contrastive learning frameworks for time-series data.
**Q2**: The paper discusses little on the technical limitations and future directions. It would be great to discuss failure cases in more detail.
**Response**: We would like to note that the limitations and potential extensions of this paper are discussed in Appendix G. We will also expand this list by integrating the comments made by all reviewers in the next version.
**Table 15: Benefits of Temporal Constraints to SOTA baselines on ACIDS**
| | SimCLR Acc | SimCLR F1 | MoCo Acc | MoCo F1 | CMC Acc | CMC F1 | Cocoa Acc | Cocoa F1 | GMC Acc | GMC F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| wTemp | **0.7461** | **0.6938** | **0.7836** | **0.6618** | **0.8690** | 0.7090 | **0.8543** | **0.7665** | **0.9347** | **0.8109** |
| Vanilla | 0.7438 | 0.6101 | 0.7717 | 0.6205 | 0.8443 | **0.7244** | 0.6644 | 0.5359 | 0.9096 | 0.7929 |
**Table 16: Benefits of Temporal Constraints to SOTA baselines on PAMAP2**
| | SimCLR Acc | SimCLR F1 | MoCo Acc | MoCo F1 | CMC Acc | CMC F1 | Cocoa Acc | Cocoa F1 | GMC Acc | GMC F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| wTemp | **0.7129** | **0.6884** | **0.7800** | **0.7602** | 0.7804 | 0.7583 | **0.8442** | **0.8146** | **0.8253** | **0.8114** |
| Vanilla | 0.6802 | 0.6583 | 0.7559 | 0.7387 | **0.7906** | **0.7706** | 0.7603 | 0.7187 | 0.8119 | 0.7860 |
---
Rebuttal Comment 1.1:
Title: Additional experiment on private and shared feature contrastive loss
Comment: The benefit of the temporal constraint is proven by Tables 15 and 16. However, the major contributions of the paper are the shared and private loss and the orthogonality loss. I would suggest similar experiments be done, i.e., enhancing baselines with the proposed private and shared feature contrastive loss and orthogonality loss.
---
Reply to Comment 1.1.1:
Title: Additional results on benefits of private and shared feature contrastive loss
Comment: Thank you for the suggestion! We have conducted additional experiments to assess the performance of selected baselines with our proposed factorized contrastive and orthogonality losses. The tables below summarize the results on the ACIDS and PAMAP2 datasets using DeepSense as the backbone model. For these experiments, we aimed to preserve the original design of the baseline approaches while integrating our proposed loss objectives. Most baselines either focus primarily on instance discrimination (e.g., SimCLR, MoCo) or have designs that do not align well with subspace factorization (e.g., Cocoa, Cosmo). We eventually selected CMC, GMC, and TS-TCC as the main subjects of the experiments.
For CMC, as introduced in our submission, the CMC loss is applied to the shared modality embeddings, while the instance discrimination loss is applied to each private modality space. Besides, orthogonality constraints are applied between shared-private and private-private modality embeddings.
For GMC, we randomly generate two augmented views. For each view, we apply GMC’s original loss objective to the **shared space embeddings** across the modalities. Then we measure the NT-Xent loss on the private embeddings of the two views for private loss. Lastly, we added the orthogonal loss between the factorized subspaces.
For TS-TCC, we consider TS-TCC's original loss objective as the **private loss**, since it aims to learn temporal representations within a single modality. To measure the shared loss, we calculate the NT-Xent loss by contrasting the shared space embeddings of each modality pair. Lastly, we added the orthogonal loss between the factorized subspaces.
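For readers unfamiliar with the orthogonality constraint used throughout these variants, here is a minimal sketch of one common formulation: penalizing the squared cosine similarity between paired shared and private embeddings. The function name and plain-list representation are hypothetical; the paper's exact loss is not reproduced here.

```python
def orthogonality_loss(shared, private):
    """Mean squared cosine similarity between paired shared/private
    embeddings; 0 when the two subspaces are orthogonal per sample."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = sum(a * a for a in u) ** 0.5
        nv = sum(b * b for b in v) ** 0.5
        return dot / (nu * nv)
    sims = [cos(s, p) for s, p in zip(shared, private)]
    return sum(c * c for c in sims) / len(sims)

# Orthogonal embedding pairs incur zero penalty; identical ones the maximum.
a = [[1.0, 0.0], [0.0, 1.0]]
b = [[0.0, 1.0], [1.0, 0.0]]
print(orthogonality_loss(a, b))  # 0.0
print(orthogonality_loss(a, a))  # 1.0
```

Minimizing this term pushes the shared and private factors of each modality toward carrying non-overlapping information.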
We can see from the tables that introducing our proposed loss methods has improved the performance of the baselines (relatively) by **up to 12.00% in ACIDS and 5.83% in PAMAP2**. This demonstrates the effectiveness of the factorized contrastive orthogonal loss as an enhancement to the existing contrastive learning frameworks.
Please let us know if you have any further concerns or comments.
**Table 17: Benefits of Factorized Contrastive Orthogonal Constraints to SOTA baselines on ACIDS**
| | CMC | | GMC | | TS-TCC | |
| --- | --- | --- | --- | --- | --- | --- |
| | Acc | F1 | Acc | F1 | Acc | F1 |
| wOrth | **0.9456** | **0.8014** | **0.9343** | **0.8174** | **0.8032** | **0.7093** |
| Vanilla | 0.8443 | 0.7244 | 0.9096 | 0.7929 | 0.7667 | 0.6164 |
**Table 18: Benefits of Factorized Contrastive Orthogonal Constraints to SOTA baselines on PAMAP2**
| | CMC | | GMC | | TS-TCC | |
| --- | --- | --- | --- | --- | --- | --- |
| | Acc | F1 | Acc | F1 | Acc | F1 |
| wOrth | **0.8367** | **0.8255** | **0.8166** | **0.7892** | **0.7863** | **0.7484** |
| Vanilla | 0.7906 | 0.7706 | 0.8119 | 0.7860 | 0.7772 | 0.7246 |
BIRD: Generalizable Backdoor Detection and Removal for Deep Reinforcement Learning | Accept (poster) | Summary: The paper addresses the challenge of detecting backdoored reinforcement learning policies. The injection of backdoors in RL policies was first studied in [19] and while there has been some work on detection of backdoored policies in [2] and [14] - these methods are limited to settings where the trigger is in the competing agent's actions [14] or limited to perturbation patches [2]. This paper formulates trigger restoration as an optimization problem and designs a novel metric to detect backdoored policies.
Strengths: The paper attempts to solve an important problem on supply chain of machine learning models - detection of backdoored policies and elimination of triggers.
Weaknesses: * The formulation has one serious weakness - it assumes that we have access to value function in addition to the agent's policy network and the clean environment. The value function is "poisoned" and so, just optimizing against this value function allows one to discover triggers. This is unrealistic. In practice, the RL policies in the supply chain would be procured as a state to action map or sequence of state to a action - that is, one would just have access to the policy and not have access to the value function.
The entire approach is predicated on this availability of the value function because the trigger recover is framed as an optimization problem that maximizes the value function. So, this is the primary concern of the reviewer.
* Some statements are inaccurate, though they are not serious concerns and are more of a presentation issue. For example, lines 108/109: "To inject the backdoor, the attacker needs to manipulate .. reward function .. " This is just one way to inject Trojans - for supervised learning, one can inject Trojans by directly manipulating the network, and the same could be feasible for RL. So, it would be better to say, "existing methods manipulate the reward function for injecting backdoors.."
* The method for Trojan elimination uses a number of hyperparameters which appear to be critical for the success of the method - the definition of high activation for a neuron, the selection of the top L neurons, the threshold epsilon_1, etc. In its current presentation, the approach looks rather hacky - it might be made to work on 9 scenarios but will be difficult to generalize.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * Can you help us understand a practical setting where one would necessarily have access to the policy as well as the value function for the policy? Isn't this a very major assumption that severely limits the use of the proposed method? It also makes the problem much easier to solve. Even if a buyer requires that the policy be accompanied by a value function, the attacker could produce a value function with the trigger removed. Why would an attacker provide a value function that has the trigger behavior embedded in it?
* The part about different triggers having a common shortcut is very interesting. But Fig 1 is not clear. What does "high activation" mean here? Neurons can have very different scales of activation, particularly across different layers - how do you decide that some activation is high, and how do you compare different neurons with each other? What does it mean to reinitialize weights for the highest-value neurons, and how does that correspond to breaking shortcuts (the traditional definition of a shortcut is low-layer neurons directly impacting very high-layer neurons)?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: There are no concerns of negative broader impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive review. Please see below for our response and clarifications.
**The reviewer first questioned the assumption of accessing the target agent’s value function.**
We thank the reviewer for the valuable feedback. We would like to kindly point out that this assumption can be practical in many cases. Our assumption simulates a practical scenario in which the agent user outsources the policy training process to a third party (a model training service provider). The user can request both the policy network and the value function from the service provider, or only choose providers that are willing to provide the value function. As for service providers, providing the value function to their customers introduces no extra cost, as they need to train it anyway. For example, we checked the 10 most popular models on **Hugging Face** and found that all of them included the value network.
We also agree with the reviewer that the attacker could intentionally attach a benign value network instead of the backdoored one to make the attack more stealthy, or that we may not have access to the value network at all. We address this scenario and provide a corresponding solution in Supplement S3.4. In such cases, we restore the trigger by minimizing the actual return of the agent, i.e., we change the objective function in Eqn. (1) to
$\min_\Delta \sum_s\rho^{\pi}(s)\sum_{a}\pi(s+\Delta)R(s+\Delta, \pi(s+\Delta))$,
where $R$ is the actual reward function of the RL problem. The insight is that, for a backdoored agent, the actual return will drop when facing the triggered environment.
Fig. S4 in Supplement shows that our method can still outperform all baselines in backdoor detection. Our removal step is not affected as Eqn. (4) does not require the agent’s value function. It optimizes the agent’s total return under the actual reward. Table 4 in the submitted pdf file shows that with the trigger restored using this adaptive method, our backdoor removal is still effective (We select five setups from the nine attack scenarios). We will highlight this in the next version.
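As a toy, gradient-free sketch of this adaptive idea (hypothetical names; the actual method optimizes $\Delta$ with a generative model and Monte Carlo return estimates rather than enumerating candidates):

```python
def restore_trigger(estimated_return, candidate_deltas):
    """Sketch of value-network-free trigger restoration: among candidate
    perturbations, pick the one that minimizes the agent's actual return,
    since a backdoored agent's return collapses under its trigger."""
    best = min(candidate_deltas, key=estimated_return)
    return best, estimated_return(best)

# Toy backdoored agent: its return collapses only when delta == 3.
toy_return = lambda d: 0.0 if d == 3 else 10.0
delta, ret = restore_trigger(toy_return, range(8))
print(delta, ret)  # 3 0.0
```

The same search principle carries over when `estimated_return` is replaced by rollouts of the policy in the perturbed environment.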
**The reviewer then questioned the sensitivity of our method to key hyper-parameters.**
We thank the reviewer for the valuable feedback. In Supplement S4, we systematically evaluate the sensitivity of BIRD to hyper-parameter changes, including the one pointed out by the reviewer – the number of neurons/convolutional kernels selected in each layer for resetting ($L$) – in Table S8. The results indicate that our approach is insensitive to variations in these hyper-parameters.
In addition, we conduct an additional experiment to evaluate the $\epsilon_1$ in Eqn.(4). Table 5 in the submitted pdf file shows that our method is robust against the variation of this hyper-parameter. We will add this result in the next version.
**The reviewer then asked for clarifications about our neuron reinitialization process and its connection to breaking shortcuts.**
Thank you for pointing this out. Sorry for the confusion.
We notice that the activation values of neurons at different layers are at different scales. As such, we sort neurons based on their activation value within each layer. Our policy networks involve two types of layers – the convolutional layer and the fully-connected layer. More specifically, as shown in Algorithm 3 in Supplement S1, during the backdoor removal, we first run a warm-up stage to collect a set of $E$ trajectories from the poisoned environment. For each state in these trajectories, we input it into the policy network and record the activation value for each kernel in the convolutional layer and each neuron in the feedforward layer. We then compute the activation mean of each kernel/neuron across all the states.
Given these mean activation values, we then select $L$ neurons/kernels from each layer in the policy network. Specifically, for each convolution layer, the output for each state is a 3-D tensor with the size of $[C, H, W]$. We first find the highest activation value in each channel (i.e., each $H\times W$ 2-D activation matrix). Then, we rank these channel-wise highest activation values to select the top $L$ channels with the $L$ highest values. We reinitialize the weights and biases of the kernels corresponding to the selected channels. For each linear layer, we rank the activation value and select the top $L$ neurons. We reinitialize the weight and bias of these selected neurons. Regarding the reinitialization operation, we reset the weights and biases as zero. We are again sorry for the confusion due to these missing details. We will add them together with a more detailed description of Algorithm 3 in the next version.
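The per-layer selection-and-reset step described above can be sketched as follows. This is a simplified, hypothetical representation (plain lists in place of the actual convolutional kernels and linear layers), not the implementation in Algorithm 3.

```python
def reset_top_units(layers, L):
    """For each layer, zero the weights and bias of the L units with the
    highest mean activation -- the shortcut-breaking reset step."""
    for layer in layers:
        acts = layer["mean_activation"]
        # Rank units within the layer, so different activation scales
        # across layers never get compared with each other.
        top = sorted(range(len(acts)), key=acts.__getitem__, reverse=True)[:L]
        for i in top:
            layer["weight"][i] = [0.0] * len(layer["weight"][i])
            layer["bias"][i] = 0.0
    return layers

layer = {"mean_activation": [0.1, 0.9, 0.4],
         "weight": [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
         "bias": [0.5, 0.5, 0.5]}
reset_top_units([layer], L=1)
print(layer["weight"][1], layer["bias"][1])  # [0.0, 0.0] 0.0
```

For convolutional layers the ranking key would be the channel-wise maximum activation, as described above, with whole kernels reset instead of single neurons.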
We came up with this idea following [1][2]. As discussed in [1], when the trigger is presented in the input, the neurons/kernels with high activation values at each layer typically form the backdoor shortcuts. Resetting their weights and biases as zero can potentially remove the shortcuts and give the model flexibility in learning the correct actions under the poisoned inputs. In our method, we follow this idea to remove backdoor shortcuts and give the policy the opportunity to learn correct policies in the poisoned environment.
[1] Training with More Confidence: Mitigating Injected and Natural Backdoors During Training, NeurIPS 2022.
[2] Adversarial Neuron Pruning Purifies Backdoored Deep Models, NeurIPS 2021.
---
Rebuttal Comment 1.1:
Title: Thank you for the supplementary results
Comment: "We also agree with the reviewer that the attacker could intentionally attach a benign value network instead of the backdoored one to make the attack more stealthy, or we do not have access to the value network. We address this scenario and provide a corresponding solution in Supplement S3.4. "
Yes, this answers my concern. The authors might want to expand the scope of the presentation, and not make it appear too reliant on value network access.
"Thank the reviewer for the valuable feedback. In Supplement S4, we systematically evaluate the sensitivity of BIRD to hyper-parameter changes, including the one pointed out by the reviewer – the number of neurons/convolutional kernels selected in each layer for resetting (
) in Table S8. The results indicate that our approach is insensitive to the variations in these hyper-parameters."
Thank you. This addresses my second major concern.
I will raise my score to be positive.
---
Reply to Comment 1.1.1:
Comment: We do appreciate the reviewer's thoughtful review of our paper and the positive feedback.
We are pleased to hear that our solution in Supplement S3.4 addresses the reviewer's concern regarding the assumption of the value network. And we will certainly work on enhancing this aspect to provide a more comprehensive view of our approach.
Moreover, we are grateful that our evaluation of hyper-parameter sensitivity addresses the reviewer's second major concern.
We would like to thank the reviewer for increasing the score. As we proceed with the revision, we will be mindful of the suggestions and present a stronger version of our paper based on the reviewer's feedback. | Summary: This paper addresses the threat of backdoor attacks against deep reinforcement learning (DRL) policies. To tackle this problem, the authors propose BIRD, a novel generalizable backdoor detection and removal method for pretrained DRL policies in a clean environment without any knowledge of the attack specifications or access to the training process. They formulate trigger restoration as an optimization problem and introduce a novel metric for detecting backdoored policies. The authors also develop a fine-tuning method to remove the detected backdoor. The environmental results demonstrate the effectiveness and computational efficiency of BIRD, as well as its robustness against various backdoor attacks.
Strengths: 1. Originality: BIRD, is the first approach to detect and remove backdoors from a pre-trained DRL policy without requiring any prior knowledge. The authors demonstrate the novelty and advantages of each component of the algorithm, highlighting the strong practical value of the proposed technique in addressing backdoor attacks.
2. Quality and clarity: The paper is well-written and organized, effectively conveying the authors' motivation and the details of the proposed technique. The main contribution lies in the development of a comprehensive defense strategy consisting of trigger restoration, backdoor detection, and removal. The authors provide clear and detailed insights into each part, along with the technical details. The experiments are well-designed, conducted rigorously, and include ablation studies to validate the efficacy of key design choices in BIRD.
3. Significance: By providing an effective defense approach, the authors contribute valuable insights to adversarial RL defense against backdoor attacks. This work opens up possibilities for further research and development in addressing backdoor attacks, making it a notable contribution to the community.
Weaknesses: 1. Based on my understanding of this paper, the authors' approach focuses on the detection and removal of backdoor attacks rather than training a robust policy from scratch to defend against them. BIRD performs fine-tuning to remove the backdoor, but it does not directly improve the robustness of the RL algorithm itself. However, I do not have any issues with the authors' approach; I am simply highlighting this aspect.
2. Regarding the presentation of experimental results, I noticed that the tables can sometimes be confusing. For instance, in Table 2, directly comparing the results of Qbert, COMA, and YSNP may lead to misunderstandings.
3. I am curious to know whether the choice of different backdoor attack methods would also impact the effectiveness of BIRD. Could the authors provide a brief explanation of this question?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In summary, I have few additional questions or concerns. As mentioned in the weaknesses section, there were some suggestions for improvement provided. I am willing to revise my assessment after further discussion.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper extensively discusses the future directions of their approach, exploring the potential applications, improvements, and extensions of BIRD. However, it would be beneficial to include more discussions regarding the limitations and practical significance of the proposed method. The potential impact of the work should be better highlighted, offering insights into its practical implications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and constructive review. Please see below for our response and clarifications.
**The reviewer first questioned the presentation of Table 2.**
We thank the reviewer for the suggestion. We agree with the reviewer that the results of Qbert, COMA, and YSNP cannot be compared directly. In Table 2, we show the ablation study results of two key designs in our method: our proposed reward-based detection metric and the neuron re-initialization strategy. Comparing Columns 2&3 and Columns 4&5 in each row reflects the effectiveness of these two designs. Due to the space limit, we combined the results of different games into one table. We will add more clarification to avoid confusion between the results of different games in the next version.
**The reviewer also asked about BIRD’s effectiveness against different attacks.**
We thank the reviewer for this interesting question. We would like to kindly highlight that we already consider three different attacks in our work, one for each type of game (RL setup): single-agent games, multi-agent competitive games, and multi-agent cooperative games. To the best of our knowledge, there are no other existing backdoor attacks against multi-agent RL. As such, we found a more recent attack against single-agent RL [1]. This attack considers a similar attack setup as TrojDRL (the one we evaluated) but with a different method of injecting the backdoor.
We conduct an extra experiment on the Atari Breakout and Seaquest games. For each game, we train $5$ backdoored agents with this attack and mix them with another $5$ clean agents. We apply BIRD for backdoor detection and removal. The results shown in Table 3 in the submitted pdf file demonstrate the effectiveness of our method against this attack. We will add this experiment to the next version.
[1] Agent Manipulator: Stealthy Strategy Attacks on Deep Reinforcement Learning. Applied Intelligence 2022.
**Finally, the reviewer pointed out that BIRD performs fine-tuning to remove the backdoor, but it does not directly improve the robustness of the RL algorithm itself.**
We thank the reviewer for pointing this out. We will emphasize in the paper that our goal is to robustify a pre-trained agent against backdoor attacks (i.e., identify backdoored agents and remove the backdoor). We do not consider the setup that aims to improve the robustness of the policy training process to reduce the risk of being backdoored.
---
Rebuttal Comment 1.1:
Title: Follow up with the reviewer
Comment: We thank Reviewer jZ8t again for the insightful comments. Since the discussion phase is about to end, we are writing to kindly ask if the reviewer has any additional comments regarding our response. We are at their disposal for any further questions. In addition, if our response and additional experiments address the reviewer's concerns, we would like to kindly ask if the reviewer could reconsider their score.
---
Rebuttal Comment 1.2:
Comment: Thank you for your response, which addresses some of my concerns. The additional experiments also add more evidence to showcase the effectiveness. After careful consideration, I decided to keep my score.
---
Reply to Comment 1.2.1:
Comment: We thank the reviewer for the kind reply! We are happy that our response could help further demonstrate the effectiveness of our method. We will add the changes to the next version and follow the reviewer's suggestion to improve Table 2.
In addition, a finetuning method is also presented to remove the backdoor, while maintaining the agent’s performance in a clean environment. Experiments are conducted over ten different single-agent or multi-agent environments.
Strengths: - The NeurIPS community finds the topic of backdoor threats in Reinforcement Learning highly relevant.
- The paper is well-written and easy to follow.
- The empirical evaluation seems to be comprehensive.
Weaknesses: - Can you please also report the performances in other popular metrics, e.g., ROC, in addition to the F1 score?
- Can you create some specified baselines tailored to the RL setup? Since I think it might be unfair/inappropriate to apply those defenses for classification models to RL setup.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see my comments above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Please see my comments above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and constructive review. Please see below for our response and clarifications.
**First, the reviewer asked for an additional ROC curve for the results in Fig. 2.**
We thank the reviewer for the valuable comment. Following this suggestion, we draw the ROC curve in Fig. 1 in the submitted pdf file. We vary the detection threshold $\epsilon$ from -1 to 1 and plot the ROC curve for BIRD in the selected games. Note that the comparison baselines have a fixed detection threshold. Given that their detection performance is low, we do not draw ROC curves for NC and Pixel. The results are consistent with Fig. 2 in the main text.
**Second, the reviewer asked for an additional baseline method that is suitable for RL setup.**
Thanks for the comment. We fully agree with the reviewer that adding such a baseline can better demonstrate the effectiveness of our method, as the current baselines are not designed for RL. To construct such a baseline, we adapt a current baseline to the RL setup. We consider the Pixel method [1], which is stronger than NC. To adapt Pixel to RL, we partially leverage the design of our method. That is, we still model trigger restoration as solving Eqn. (1) in Lines 135-136, but instead of modeling $\Delta$ as a generative process, we use the Pixel objective to model $\Delta$. That is, $\Delta = \text{clip}(b_p, 0, 1) - \text{clip}(b_n, 0, 1)$, where $b_p$ and $b_n$ represent the positive and negative perturbations added to the state. As such, the final objective function for trigger restoration becomes
$\max\sum_s\rho^{\pi}(s) \sum_{a} \pi(s+\Delta) Q_{\pi}(s+\Delta, \pi(s+\Delta))$, where
$\Delta = \text{clip}(b_p, 0, 1) - \text{clip}(b_n, 0, 1)$.
This objective function can be solved by the REINFORCE method. We denote this trigger restoration method pixel-RL. After restoring the trigger, we can use two detection metrics with this method – the original Pixel metric and our proposed metric – denoted pixel-RL-trigger-size and pixel-RL-reward, respectively.
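As a minimal sketch of this parameterization on a flat toy state (hypothetical helper names; the real $b_p$ and $b_n$ are image-shaped tensors optimized via REINFORCE):

```python
def clip(v, lo, hi):
    """Clamp a scalar to the interval [lo, hi]."""
    return max(lo, min(hi, v))

def pixel_delta(b_pos, b_neg):
    """Pixel-style trigger parameterization:
    Delta = clip(b_p, 0, 1) - clip(b_n, 0, 1), applied element-wise."""
    return [clip(p, 0.0, 1.0) - clip(n, 0.0, 1.0) for p, n in zip(b_pos, b_neg)]

print(pixel_delta([1.5, 0.3, -0.2], [0.0, 0.8, 2.0]))  # [1.0, -0.5, -1.0]
```

The two clipped components keep every element of the perturbation within [-1, 1] regardless of the unconstrained values of $b_p$ and $b_n$.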
We compare our method with these two adapted baselines under six setups: the two single-agent Atari games (Seaquest and Qbert), each with targeted and untargeted attacks, and the multi-agent games QMIX and You-Shall-Not-Pass. For each environment, we train 5 clean and 5 backdoored agents. Fig. 2 in the submitted pdf file shows the comparison of detection results. As we can observe from the figure, BIRD still outperforms both methods (pixel-RL-trigger-size and pixel-RL-reward). We will add this experiment to the next version.
[1] Better Trigger Inversion Optimization in Backdoor Scanning, CVPR 2022.
---
Rebuttal Comment 1.1:
Title: Follow up with the reviewer
Comment: Thanks the Reviewer Mdit again for the insightful and positive comments. Since the discussion phase is about to end, we are writing to kindly ask if the reviewer has any additional comments regarding our response. We are at their disposal for any further questions. In addition, if our new experiments address the reviewer's concern, we would like to kindly ask if the reviewer could reconsider their score. | Summary: This paper introduces BIRD (Backdoor Identification and Removal for DRL), a method for detecting and removing triggers in reinforcement learning models. In backdoor attacks, an attacker injects a trigger into the agent's environment during training, leading the agent to take backdoored actions that decrease its actual reward. BIRD addresses the challenge of detecting and removing backdoors from pretrained policies without knowledge of the attack specifications or access to the training process.
BIRD formulates trigger restoration as an optimization problem, identifying the trigger by maximizing the agent's value function. It introduces a novel detection metric based on the actual reward difference before and after adding the restored trigger to the environment. The method effectively detects backdoored agents and employs finetuning with additional regularization terms to remove the backdoor while maintaining performance in the clean environment. Evaluations on various benchmarks demonstrate BIRD's superiority over existing methods, highlighting its generalizability, computational efficiency, and robustness against different attack variations. Overall, BIRD offers an effective solution for detecting and removing triggers in reinforcement learning models, mitigating the vulnerability to backdoor attacks.
Strengths: The paper presents a groundbreaking method for defending against backdoored models in machine learning. The key innovation lies in exploring the total received reward as a means of detection. This unique approach sets it apart from previous works in the field.
The experimental results strongly support the efficacy of the proposed method, showcasing its ability to outperform or at least match the performance of existing techniques. The findings demonstrate that the idea of considering the total reward proves to be highly effective in mitigating the impact of backdoor attacks on machine learning models.
Overall, the paper introduces a novel and promising approach to addressing the backdoor vulnerability in machine learning. The method's success, as evidenced by the experimental results, underscores its potential as a robust defense mechanism against backdoored models.
Weaknesses: However, while the idea of considering the total received reward is novel and promising, there are potential weaknesses that need to be addressed. Firstly, it remains uncertain whether the proposed method's reliance on total reward as a detection mechanism is robust across all scenarios. In particularly noisy or challenging environments, where the model may struggle to learn effectively, it is unclear if the model will consistently receive significantly higher rewards even without the presence of a backdoor.
Additionally, it would be valuable to investigate the potential trade-off between detecting backdoored models and maintaining overall model performance. Since the method focuses on detecting triggers by maximizing the total reward, there is a possibility that it could inadvertently compromise the model's ability to achieve high performance in non-backdoored scenarios. Understanding the impact of this trade-off and ensuring a balance between detection and model performance would strengthen the applicability and practicality of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Does the main idea work efficiently in different scenarios? More experimental results are needed.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and constructive review. Please see below for our response and clarifications.
**First, the reviewer raised concerns regarding the effectiveness of using reward as the detection metric, particularly for challenging games where even a clean agent may struggle to receive a high reward.**
We thank the reviewer for raising this concern. We would like to kindly emphasize that our metric does not rely on the absolute value of the agent’s reward. Instead, we compute the reward difference before and after adding the restored trigger to the environment. To offset the influence of the absolute value, we normalize the reward difference (i.e., $\phi(\pi, \Delta) = (\bar{\eta}(\pi, \Delta) - \bar{\eta}(\pi)) / \eta_{\text{max}}$ in Line 210). A low metric $\phi(\pi, \Delta)$ means that the agent’s performance drops after observing the trigger, indicating the agent may contain the backdoor.
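For concreteness, the normalized metric can be computed as follows (hypothetical function name; $\eta_{\text{max}}$ denotes the maximum achievable return used for normalization):

```python
def detection_metric(ret_clean, ret_triggered, ret_max):
    """phi(pi, Delta) = (eta(pi, Delta) - eta(pi)) / eta_max.
    Values near -1 signal a large reward drop under the restored trigger."""
    return (ret_triggered - ret_clean) / ret_max

# A backdoored agent: return falls from 100 to 5 once the trigger appears.
print(detection_metric(100.0, 5.0, 100.0))  # -0.95
```

Because only the normalized difference matters, a clean agent whose return barely changes under the restored trigger yields a metric near 0 regardless of its absolute reward scale.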
Regarding a challenging game, we can consider the following two scenarios.
(1) We have a weak agent that can only receive very low rewards. Consequently, it is unlikely to be selected as the attacker's target: attacking such a weak agent cannot inflict significant damage and would only waste the attacker's time and effort.
(2) We have a sub-optimal agent that can still perform in the environment but cannot receive a very high reward due to the task's complexity. An attacker will have an incentive to attack such an agent with the goal of significantly reducing its reward. In this case, we will still observe a notable reward drop before and after adding the restored trigger to the environment. We admit that the drop will be less pronounced than for a well-trained/near-optimal agent, i.e., the metric value will not be very close to -1. However, our method can still distinguish the backdoored agent from a clean one by setting a less aggressive threshold.
To verify this, we conduct an extra experiment using the Atari-SpaceInvaders game. We first prepare $20$ agents: $5$ well-trained clean agents, $5$ sub-optimal clean agents, $5$ well-trained backdoored agents, and $5$ sub-optimal backdoored agents. For the backdoored agents, we only use the successfully attacked ones, i.e., those whose performance drops to almost zero after observing the trigger; otherwise, we treat the attack as unsuccessful. We mix the agents together and then apply BIRD to detect the backdoored ones. We vary the detection threshold $\epsilon=-0.5/-0.6/-0.7/-0.8/-0.9$ and report the detection performance. As shown in Table 1 in the submitted pdf file, as $\epsilon$ increases, we capture more and more sub-optimal backdoored agents without introducing any false positives. We thank the reviewer again for pointing this out. In the next version of the paper, we will clarify that an overly aggressive detection threshold should not be selected, in case some sub-optimal agents are backdoored.
In addition, we acknowledge that, in general, it is tricky to determine the optimal detection metric. As such, we design our final removal step so that it minimally affects the agent's performance in a clean environment. With that said, conservatively, we can apply the restoration and removal steps to all the given agents. In this experiment, as shown in Table 2 in the submitted pdf file, after applying the removal step to all $20$ agents, we observe that (1) for clean agents, performance is only marginally affected; (2) for backdoored agents, performance in the clean environment remains similar and the backdoor is removed. This shows that even if our detection step fails to capture some backdoored agents, this conservative solution can still remove the backdoor for both optimal and sub-optimal agents. We will emphasize this in the next version.
**Second, the reviewer also raises the concern of whether our method will affect the agent’s performance in a clean environment.**
We thank the reviewer for pointing this out. First, we want to clarify that our restoration and detection steps do not modify the given agent’s policy and thus will not affect its performance. Our removal step indeed may affect the agent’s performance in the clean environment. As mentioned in Section 3.4, Lines 257-264, we add an additional constraint to our retraining objective function to avoid this. As demonstrated in Table 1 in the paper, for all the selected games, BIRD introduces only negligible performance differences in the clean environment after the removal step. Our extra experiment in Table 2 in the submitted pdf file also demonstrates similar efficacy for sub-optimal agents and clean agents. We will emphasize this in the next version.
---
Rebuttal Comment 1.1:
Title: Follow up with the reviewer
Comment: We thank Reviewer V9nG again for the insightful and positive comments. Since the discussion phase is about to end, we are writing to kindly ask whether the reviewer has any additional comments regarding our response. We are at their disposal for any further questions. | Rebuttal 1:
Rebuttal: We thank the reviewers for the constructive feedback. We addressed all the comments. Below, we summarize our responses:
We have added all experiments mentioned by reviewers (All the results are in the submitted PDF):
1. We demonstrated the effectiveness of BIRD in detecting poisoned/backdoored agents with a sub-optimal performance by varying the detection threshold (Reviewer V9nG).
2. We demonstrated the effectiveness of BIRD’s backdoor removal for suboptimal agents (Reviewer V9nG).
3. We added the ROC curve for the detection results in Fig.2 in the main text (Reviewer Mdit).
4. We compared BIRD with two additional baselines that tailor existing methods for RL setup. We showed BIRD is still better than these two adaptive baselines (Reviewer Mdit).
5. We tested BIRD against a more recent backdoor attack and demonstrated its effectiveness (Reviewer jZ8t).
6. We demonstrated that BIRD can be adapted to a scenario where the value function is not available (Reviewer M2QE).
7. We showed that BIRD is not that sensitive to the changes in $\epsilon_1$ (Reviewer M2QE).
We have clarified all the questions from reviewers:
**Reviewer V9nG**
1. We clarified that by selecting the proper detection threshold, our method can detect agents with a sub-optimal performance (in complicated games).
2. We demonstrated that our method has a marginal impact on the agent’s performance in the clean environment.
**Reviewer Mdit**
1. We followed the reviewer’s suggestion and added the ROC curves.
2. We followed the reviewer’s suggestion and added two additional baselines that are more suitable for the RL setup.
**Reviewer jZ8t**
1. We followed the reviewer’s suggestion and tested BIRD against another recent attack.
2. We clarified that our method focuses on robustifying a trained agent rather than designing a robust agent training algorithm.
**Reviewer M2QE**
1. We first clarified that assuming the availability of the value function is a reasonable and practical assumption. We further demonstrated that BIRD is still effective when the value network is not available.
2. We followed the reviewer’s suggestion and added an additional experiment that demonstrates the insensitivity of BIRD to $\epsilon_1$.
3. We clarified our method of identifying and reinitializing neurons during the backdoor removal.
4. We followed the reviewer’s suggestion, updated an imprecise description, and provided a discussion of potential negative broader impacts.
We hope this summary can facilitate the reviewers' evaluation and discussion of our paper. We are at your disposal for any further questions. In addition, we would appreciate it if you could kindly consider updating your scores if our rebuttal has satisfactorily addressed your concerns. Thank you again for your time and consideration.
Pdf: /pdf/1e2b00df4672b642630c0e58c3b91c606a2e5570.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer | Accept (poster) | Summary: The key contribution of this paper is that the training dynamics of a 1-layer Transformer model is demystified theoretically and empirically. The authors have found that the self attention operator performs a discriminative scanning algorithm on the input tokens, attending more on distinct tokens and focusing less on common tokens. They also demonstrate that there is a phase transition to an attention snapping step.
Strengths: - The Transformer's self attention operator has been effective yet a black box in many applications. The authors' attempt in providing a theoretical explanation of the learning process is valuable.
- A simplified setting helps better understanding.
- Clear and straightforward analysis that aligns well with the paper's findings.
Weaknesses: I do not see any critical weakness in this paper. However, more analysis might be needed on multi-layer Transformer settings. Although the authors have provided the attention patterns of a multi-layer model in Figure 7, it only covers a single layer in the model. Considering that most transformer-based models have multiple layers, it would be interesting to see if there are any different learning patterns in different layer depths.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Extending the discussion on multiple-layer settings from the weakness section,
1. Will there be a specific pattern regarding the phase transition across different layers? e.g. slower phase transition in the last layer etc.
2. Is there a possibility that each layer may specialize on different roles? e.g. initial layers "scan" while latter layers "snap"
3. (optional) Will these findings be consistent across different domains that use Transformer models? e.g. computer vision models, graph transformers etc.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have implicitly discussed the limitations. However, the broader impact was not discussed. Please add a broader impact section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and encouraging comments! We appreciate the suggestions! Here are the answers to the questions.
> Attention patterns in multi-layer model
We haven’t systematically analyzed the attention patterns in multi-layer models yet. Initial experiments show that the attention scores in the top layer are not as sparse as the bottom-layer attention patterns. We will leave this for future work.
> Will there be a specific pattern regarding the phase transition across different layers? e.g. slower phase transition in the last layer etc.
We haven’t done a systematic multi-layer analysis yet. The intuition is that the sparse attention patterns in the lower layer lead to combinations of tokens that co-occur frequently, which are then further combined in the higher layer. So it is possible that top-layer attention freezes after bottom-layer attention (i.e., a slower phase transition), since the bottom layers provide the building blocks for the top.
> Is there a possibility that each layer may specialize in different roles? e.g. initial layers "scan" while latter layers "snap"
Yes, it is possible that different layers are in different stages, as mentioned in the previous answer.
> (optional) Will these findings be consistent across different domains that use Transformer models? e.g. computer vision models, graph transformers etc.
It is possible, since the token combination hierarchies differ across domains. E.g., in vision, each patch may carry less information, information tends to spread evenly across the image, and a deep and balanced hidden hierarchy is needed; in language, a few tokens may carry the majority of the information, and tokens may have nonlocal relationships. Graphs from different domains may contain very different feature structures. Our goal is not to design specific models for specific domains, but to characterize how features are learned by the (multi-layer) Transformer architecture under different data distributions.
> Please add a broader impact section
Thanks for your suggestions. Here is one tentative version:
Our work gives a framework to analyze the dynamics of a 1-layer transformer using tools from nonlinear dynamical systems. Our finding that self-attention pays sparse attention to a subset of distinct tokens that co-occur frequently with the query can have many implications. First, it helps understand the inductive bias of self-attention, reduce spurious correlations (by removing tokens that are not likely to be attended to), and improve generalization. Second, it can also help understand the mechanism of hierarchical token combination, and potentially open the door to understanding multi-layer transformers. The two-stage training process, namely “scan and snap”, can also be an important tool for understanding how multi-layer transformers work in practice.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: It's a shame that the authors did not provide quantitative analyses to support their responses, but I understand that analyzing multi-layer settings would require more than a week. Other than that, the authors tried to provide nice intuitions for my questions. Thank you.
Also, while I do agree with the other reviewers that a 1-layer setting might not be enough to fully understand how Transformer works, this paper is definitely a good stepping stone for demystifying the Transformer model. I don't think I've seen a paper that does this, and I believe this paper deserves attention.
---
Reply to Comment 1.1.1:
Title: Thanks for your support!
Comment: We really appreciate your support! We hope this paper can shed some light on the underlying mechanism of self-attention, in particular through quantitative and rigorous descriptions of where its sparsity comes from and how it relates to the data distribution. Our work also contributes novel techniques for analyzing the behavior of self-attention and neural networks. It is a hard journey, but we will keep trying.
A systematic understanding of multi-layer settings would require a nontrivial amount of additional work and is out of the scope of this submission. Here we share our (as yet unverified) intuitions to the best of our knowledge to address the reviewers' questions. We will try our best to show some initial empirical results during the discussion period (though this is not guaranteed).
Thanks again! | Summary: This paper analyzes a simple architecture of 1-layer transformer’s SGD training dynamics for the task of next token prediction. The authors prove that self-attention acts as a discriminative scanning algorithm, with an inductive bias to favor unique key tokens that frequently co-occur with the query tokens.
Strengths: This work aims at understanding how Transformers work, and tries to understand the training dynamics in a simplified setup with 1-layer transformer. The work conducted both theoretical analysis and empirical experiments.
Weaknesses: 1. While it is an important research question to understand how Transformers work, this work only offers some observations regarding how attention weights change over time on two different types of context tokens. The setup is too simplistic to generalize to real-world scenarios. The conclusions depend on a lot of assumptions made in the paper.
2. Several assumptions made in the theoretical analysis are not clearly justified:
a) The decoder layer learns much faster than the self-attention layer. The learning rates are hyper-parameters and in typical settings, they are the same for the decoder layer and the self-attention layer.
b) It is not explained under what conditions the weak correlation assumption in Assumption 2 is valid.
3. The empirical experiments don’t share the same conditions as the theoretical analysis: e.g. batch size 1 assumption and much larger learning rate in decoder layer than self-attention.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The assumptions used in the analysis need further justifications regarding the learning rate of Y and Z, and the weak correlation assumption in Assumption 2.
2. Residual connections are a key component of Transformers. Neglecting them in the analysis leaves the conclusions questionable when generalizing to the actual architectures used in Transformer experiments.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: It is still unclear how the conclusions and observations from this paper helps to understand how Transformers work. While the research direction is generally good, I don’t find much surprise in the result that Transformers sparsify and focus attention on unique tokens.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments!
> The assumptions used in the analysis need further justifications regarding the learning rate of Y and Z, and the weak correlation assumption in Assumption 2.
We explain the intuition behind Assumption 2 (weak correlation) below:
**Regarding the upper bound**. $|\lambda_{\max}(E)| < 1/K$ ensures that the $f_i$ are almost orthogonal to each other, since $E_{ij} = f_i^\top f_j < 1/K$ holds for all $i \neq j \in [K]$. This means that the conditional probabilities of the contextual tokens given the query/last token differ substantially across sequence classes. This scenario is common in real cases, in particular when the vocabulary size $M$ becomes large and each sequence class relates to only a small portion of the tokens.
**Regarding the lower bound**. $|\lambda_i(E)| \ge 6/\sqrt{M}$ captures the case where all tokens can be common tokens with small conditional probabilities, resembling a uniform distribution. This condition is only used in formula (93), which proves property (a) of Theorem 1. Specifically, $E_{nn}'$ and $u_{nk}$ in (93) are hard to estimate, and we use this lower bound to rule out the worst case and prove that $\xi_n > 0$ always holds.
Additionally, even if Assumption 2 does not hold, Theorem 5 in the Appendix shows that formula (9) and property (b) of Theorem 1 still hold. We only need Assumption 2 to ensure $\xi_n > 0$ and to maintain the favorable properties of distinct tokens discussed in Sections 5 and 6.
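To make the near-orthogonality intuition for the upper bound concrete, here is a toy numerical illustration (entirely our own, with made-up dimensions, not the paper's construction): for random unit vectors $f_1,\dots,f_K$ in $\mathbb{R}^M$ with $M \gg K$, the pairwise inner products $E_{ij} = f_i^\top f_j$ concentrate around $0$ at scale $1/\sqrt{M}$, so the bound $|\lambda_{\max}(E)| < 1/K$ becomes easy to satisfy as the vocabulary size $M$ grows:

```python
import numpy as np

# Toy illustration (made-up dimensions; not the paper's construction):
# with M >> K, random unit vectors are nearly orthogonal, so the
# off-diagonal inner products E_ij = f_i^T f_j are of order 1/sqrt(M).
rng = np.random.default_rng(0)
M, K = 400, 5                       # toy vocabulary size / number of classes
F = rng.normal(size=(M, K))
F /= np.linalg.norm(F, axis=0)      # columns f_1..f_K are unit vectors
E_off = F.T @ F - np.eye(K)         # off-diagonal entries E_ij, i != j
print(np.abs(E_off).max())          # of order 1/sqrt(M) = 0.05, well below 1/K = 0.2
```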
> The empirical experiments don’t share the same conditions as the theoretical analysis. e.g. batch size 1 assumption and much larger learning rate in decoder layer than self-attention.
As mentioned in the reply to reviewer **sxfK**, Lemma 1 gives a gradient formula for batch size 1 for simplicity; the gradient for batch size > 1 can be obtained by summing over the samples, so our framework applies to large batch sizes as well. We indeed tested the case where the learning rate of the decoder is larger than that of self-attention (see Fig. 6), and the trend is consistent with our theoretical findings.
> I don’t find much surprise in the result that Transformers sparsify and focus attention on unique tokens.
Note that, as mentioned by reviewer **TXYd**, previous and concurrent works [1][2], published in ICLR/ICML, gave theoretical justifications for why Transformers gradually attend more to distinct key tokens under SGD training of a 1-layer transformer. In comparison, our work analyzes richer phenomena in 1-layer transformers that can be categorized as frequency and discriminative bias, which have not been brought up before. For example, we analyze multi-class settings, connect sparse attention patterns with the co-occurrence frequency of contextual tokens and the query, characterize the relative growth in detail for all tokens relevant to the class label (rather than only comparing relevant with irrelevant tokens [1][2]), characterize the complete training dynamics (rather than only the initial steps [2]), and summarize the two-stage behavior of attention scores toward convergence.
We want to emphasize that our goal is to provide a theoretical framework and lay the foundation for a rigorous theoretical understanding of Transformers. Most of the time there should be no surprise: the theory should explain what happens in practice, but in a more fine-grained and quantitative manner. Such quantitative explanations lead to new discoveries that can go beyond empirical findings and intuitions.
> Residual connections
First, note that none of the previous/concurrent works [1][2][3] analyze residual connections in their training dynamics. Here [1][2] are accepted in ICLR'23/ICML'23.
In addition, our framework can already incorporate residual connections as part of the input $f_n$ to the decoder layer $Y$ (Sec. 7). We could further characterize the scale of the coefficients $\beta_{nn’}$: when $\psi(n) \neq \psi(n’)$, i.e., the last/query tokens of sequence classes $n$ and $n’$ are different, $\beta_{nn’}$ should be much smaller. In this case, the conclusions about distinct and common tokens can be further refined. For simplicity, the main paper focuses on the case without residual connections.
**Reference**
[1] Li et al., "A Theoretical Understanding of shallow Vision Transformers: Learning, Generalization, and Sample Complexity", ICLR 2023.
[2] Oymak et al., "On the Role of Attention in Prompt-tuning", ICML 2023.
[3] Tarzanagh et al., "Max-Margin Token Selection in Attention Mechanism", https://arxiv.org/pdf/2306.13596.pdf
---
Rebuttal Comment 1.1:
Comment: I appreciate the effort from the authors to explain the intuitions behind the assumptions, and point out the fact that under empirical study, the conclusions still hold even outside of the assumptions made in theoretical analysis. I would suggest the authors add some of the intuitions and justifications to the paper to make the paper easier to digest for readers.
However, I don't think it makes sense to justify this paper using the fact that other papers that made similar assumptions were accepted earlier. Please focus on making this paper higher quality in terms of scientific merit and readability for the community.
Upon second reading the entire paper, I am convinced that it is scientifically valuable to theoretically/empirically observe: 1) distinct tokens gain attention weights (again, not surprising to me from a statistical learning point of view), and 2) the phase transition from a winner-take-all for the most distinct token to a (nearly) frozen sparse attention pattern with a logarithmical growth rate. In particular, 2) is quite interesting in itself, and I hope it can motivate more work along this line of research.
Therefore, I decide to revise my Rating to 6 and suggest the authors make some effort to clarify the intuitions behind the assumptions in the paper.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Thanks for your reply! We really appreciate that you think the work is scientifically valuable. We are happy to see that the score has been revised! We sincerely hope that this work can motivate more works along the line of research, which is precisely our motivation to start this project.
Thanks for your suggestions. We will focus on making this work higher quality from a scientific point of view. Since reviewer **TXYd** brought up the relevant works, we will reference them accordingly, analyze them in detail, and position our work in the proper context of the literature. We thank reviewer **TXYd** for bringing up these references, which we missed in our literature review. | Summary: This paper theoretically studies the SGD training dynamics of a 1-layer transformer for the task of next token prediction. The authors prove that self-attention acts as a discriminative scanning algorithm, with an inductive bias to favor unique key tokens that frequently co-occur with the query tokens.
---------------------------------------------------------------------------------
After rebuttal, I increase my score from 4 to 5.
Strengths: 1. As far as I know, the proof technique and the framework are generally novel, at least for the Transformer architecture.
2. Some high-level insights are well presented and intuitively correct.
Weaknesses: 1. Some parts of this paper are not very clear. For example, I feel the decoder layer is not formally defined. In around line 92, only the self-attention layer is introduced. Does the decoder layer refer to $W_V$? I thought $W_V$ usually refers to the matrix for the value embeddings.
2. Although this paper considers a different setup, such as a different Transformer formulation, loss function, and different assumptions, the conclusion that Transformers gradually attend more to distinct key tokens is not new in the theoretical literature ([1] and [2] below). In particular, the sentence "we are the first to analyze the attention dynamics and reveal its inductive bias on data input" in line 320 is an overclaim. Here are several recent works that also study the training dynamics of Transformers.
[1] Li et al., "A Theoretical Understanding of shallow Vision Transformers: Learning, Generalization, and Sample Complexity", ICLR 2023.
[2] Oymak et al., "On the Role of Attention in Prompt-tuning", ICML 2023.
[3] Tarzanagh et al., "Max-Margin Token Selection in Attention Mechanism", https://arxiv.org/pdf/2306.13596.pdf
I know [2] and [3] are concurrent works, while [1] is an older one about training dynamics. I would like to see a comparison and discussion between the manuscript and these works.
3. I suggest presenting a table of notations somewhere in the paper since there are many notations. For example, Section 5 is difficult to follow, although I can understand the conclusion.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Although discussing a different pair of $\eta_Y$ and $\eta_Z$ is interesting, how can it happen in practical SGD? Is it an assumption or an algorithm?
2. From my understanding, in this paper, the feed-forward layer after the self-attention layer of the Transformer is a linear layer. I am wondering whether the analysis can be extended to an MLP layer (even only one non-linear activation) with Relu or Gelu. I think feed-forward layers with non-linear activations are more common in practice.
3. I didn't fully check the proof. I found a possible typo in equation 15 in line 536. I think the last term should be $\log(\bf{1}^\top \exp(Y^\top LN(X^\top\bf{b}_T)))$.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: There is no potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. Here are the answers.
> I feel the decoder layer is not formally defined. In around line 92, only the self-attention layer is introduced.
We define the decoder layer $Y$ and self-attention layer $Z$ after reparameterization $Y = UW_V^TU^T$ and $Z = UW_QW^T_K U^T / \sqrt{d}$. Then we study the dynamics of $Y$ and $Z$ directly.
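As a small numerical sketch of this reparameterization (our own illustration; the dimensions are toy values and $U$ is treated as a fixed embedding matrix, which is an assumption for exposition, not the paper's exact formulation):

```python
import numpy as np

# Illustrative sketch of the reparameterization Y = U W_V^T U^T and
# Z = U W_Q W_K^T U^T / sqrt(d). Dimensions are toy values, not the paper's.
rng = np.random.default_rng(0)
M, d = 10, 4                        # toy vocabulary size / embedding dimension
U = rng.normal(size=(M, d))         # token embedding matrix
W_V, W_Q, W_K = (rng.normal(size=(d, d)) for _ in range(3))

Y = U @ W_V.T @ U.T                      # reparameterized "decoder layer", M x M
Z = U @ W_Q @ W_K.T @ U.T / np.sqrt(d)   # reparameterized "self-attention layer", M x M

# Z[q, k] acts as the (pre-softmax) attention logit between query token q
# and key token k, so the dynamics can be studied directly on Y and Z.
print(Y.shape, Z.shape)
```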
> Overclaimed sentence "we are the first to analyze the attention dynamics and reveal its inductive bias on data input"
We acknowledge that previous and concurrent works [1][2][3] also show that self-attention attends to relevant tokens, and we will reference these works and tone down our claim. In comparison, our work analyzes many more phenomena in 1-layer transformers that fall under frequency and discriminative bias and have not been brought up before, e.g., sparse attention patterns connected with the co-occurrence frequency of contextual tokens and the query, and the characterization of this connection over training, including the two-stage behavior of attention scores. Please check the main rebuttal.
> A comparison and discussion between the manuscript and these works
Please check the overall rebuttal for an overall comparison. See below for detailed comparison:
**Comparison with [1]**
**Setting, Assumptions and Conclusions**. [1] analyzes the SGD convergence of 1-layer ViT model (1 layer self-attention + 2 layer FFN with ReLU, with the top layer of FFN fixed as random, token embedding fixed). Under a specific binary data model in which the data label is determined by counting the number of tokens that belong to pos/neg pattern, [1] gives a generalization bound when the number of hidden nodes in FFN is large, and at the same time, shows that the self-attention attends to relevant tokens and becomes sparse (if #relevant tokens are small).
In comparison, our work focuses on language models, assumes broader data distributions (e.g., multiple classes, arbitrary conditional probability of a token given the class label), and incorporates layernorm naturally. We establish more detailed quantitative properties, e.g., attention sparsity even among relevant tokens and the two-stage evolution of attention scores, with a much simpler analysis.
**Techniques**. The techniques used in [1] are based on feature learning techniques applied to MLPs [R4 etc]. It identifies lucky neurons when the number of hidden neurons is large enough. In comparison, our framework and analysis are much simpler: we leverage the fact that certain nonlinear continuous dynamical systems can be integrated analytically to yield clean solutions (e.g., Theorem 3 (Eqn. 11) and Theorem 4 (Eqn. 127)), avoiding the complicated bounds in [1]. This allows us to characterize the converging behavior of self-attention as $t \to +\infty$. To our best knowledge, our framework and techniques are novel, which is also acknowledged by the reviewer.
**Comparison with [2] (published on Jun. 6, after the submission deadline)**
[2] focuses on 1-layer attention-based prompt-tuning, in which some parameters of the models are fixed (Wp, Wq). The analysis focuses on the initial (3x one-step) SGD trajectory, and constructs the dataset model containing specific context-relevant/context-irrelevant data, and the context-vector indicates the token relevance. As a result, [2] shows the attention becomes sparse (i.e., attending to context-relevant tokens) over time, which is consistent with ours, and shows that prompt-attention can find the relevant tokens and achieve high accuracy while self-attention/linear-attention can’t.
In comparison, our work goes beyond the two-class model and further points out that the attention weights are related to the conditional probabilities of the contextual tokens, which is more detailed than the sparse-attention result in [2] that relies on a sparsity assumption on the contextual tokens themselves. We also focus on the pre-training stage (training from scratch to predict the next token) and characterize the entire SGD trajectory of the self-attention layer, in particular its converging behavior.
**Comparison with [3] (published on Jun. 23, after the submission deadline)**
Compared to [2], [3] also analyzes the dynamics of the query-key matrix $W$ and the embedding of a single tunable token $p$ (often the [cls] token). It makes a connection between the binary classification problem with a 1-layer transformer and a max-margin SVM formulation when the tokens are linearly separable. The dynamics are characterized completely, which is nice. Note that here $p$ is not an attention weight, since its norm can be shown to go to infinity over training.
In comparison, our work does not learn the embedding of an individual token, but focuses on the dynamics of (all-pair) attention scores during training. We also work in a multi-class setup and do not explicitly assume linear separability among classes.
We thank the reviewer for the references; we will include the detailed comparison and cite these papers in the next revision.
> Nonlinearity after the decoder $Y$.
Adding nonlinearity right before cross entropy loss will make our analysis a bit more complicated but not impossible. Specifically, the nonlinearity will modify the back-propagated gradient from cross entropy loss, and Theorem 1 will take a different (and maybe more complicated) form. For simplicity, we choose not to add the nonlinearity layer for this paper.
> Possible typo in equation 15.
We indeed missed the term $X^T$ within LN() and will fix this.
**Reference**
[1] Li et al., "A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity", ICLR 2023.
[2] Oymak et al., "On the Role of Attention in Prompt-tuning", ICML'23 (Jun. 6)
[3] Tarzanagh et al., "Max-Margin Token Selection in Attention Mechanism", arXiv’23 (Jun. 23)
[R4] Z. Allen-Zhu et al, Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers, NeurIPS’19
---
Rebuttal Comment 1.1:
Title: Thank you for the responses. Further questions here
Comment: Thank the authors for the responses. Here are some further questions.
I think the response to Weaknesses 3 about the presentation is missing. Actually, it is also one of my major concerns. I highly suggest a table to summarize the notations. Also, I suggest avoiding so many notations in the text and even in Lemmas and Theorems. It is better to show some informal lemmas first to make readers understand the conclusion being presented, especially for some lemmas. If the space is limited, the formal version of key lemmas does not have to appear in the main body. Can the authors show a table of notations in the response? I think there is no limit on the length of the response now. Also, I think this table can be put in the appendix of this paper.
The remaining questions are about the extension and the assumption of this work.
1. It states that relative positional encoding can be considered in this work. Do you mean the formulation in this way: a trainable $b$ for $softmax(k_i q_l+b_i)$? What about an absolute positional encoding, such as $(v_i+b_i)softmax((k_i+b_i)\cdot (q_l+b_l))$? Or can you give a brief discussion on the formulation of absolute positional encoding you can handle?
2. For the positional encoding, since you assume the length of the sequence is infinite, how do you define or initialize the positional encoding? Is the positional encoding still meaningful? I think I cannot tell whether a token is far or close to the query if the length is infinite.
3. Why do you need the infinite length positional encoding assumption? From my understanding, by this assumption, you can study the transformer based on some stable properties. Is that right? I guess the usage of the analysis of nonlinear dynamic systems is also a reason, but I am not so familiar with that. Can you briefly discuss how to relax the analysis to a finite sequence length?
4. I think this work may have the potential to study some out-of-distribution generalization of the language models as future work. Can you provide a discussion?
For other answers, I am generally satisfied, especially with the comparison with other works. A minor point is that I don't think [R4] is a feature-learning work. I think it is still NTK work. From my understanding, feature learning works have a strong assumption of the data features, based on which they can derive stronger conclusions. Also, the returned model may change much from the initialization.
For questions raised by other reviewers, I just took a very brief look. I don't think a one-layer Transformer is a big issue. The analysis of multi-layer fully-connected networks is already challenging, let alone Transformers. Theoretical work on Transformers started only about a year ago. It is better for researchers to first figure out the mechanism of the one-layer case.
---
Reply to Comment 1.1.1:
Title: Notation table
Comment: We appreciate the reviewer's positive feedback, in particular the acknowledgment that our comparisons with existing works are satisfying and that the 1-layer Transformer setting is an important direction to work on.
We will answer the questions below.
## Notation table
We apologize for the missing answer regarding the notation table, due to the limited space of the rebuttal. Please find the notation table below; it will be put in the appendix. In our next revision, we will follow your advice and include informal lemmas/theorems in the main text to give the reader an overall picture. We are sorry for any confusion.
| Notation | Description |
|-----------|---------------|
| $M$ | vocabulary size |
| $T$ | sequence length |
| $\mathbf{e}_k$ | One-hot vector (1 at component $k$) |
| $X \in \mathbb{R}^{{T-1}\times M}$ | Input sequence (of length $T-1$) |
| $\mathbf{b}_T \in \mathbb{R}^{T-1}$ | Vector of self-attention weights to predict token at time $T$ |
| $\mathbf{x}_t \in \mathbb{R}^M $ | contextual token ($0 \le t \le T-2$) (one-hot) |
| $\mathbf{x}_{T-1} \in \mathbb{R}^M$ | last/query token (one-hot) |
| $\mathbf{x}_T \in \mathbb{R}^M $ | next token to be predicted (one-hot) |
| $\mathbf{x}_t[i] \in \mathbb{R}^M$ | $i$-th training sample of token at location $t$ in the sequence |
| $K$ | Number of possible choices the next token $\mathbf{x}_T$ could take |
| $\boldsymbol{\alpha}(t)$ | Softmax score of the output layer |
| **Learnable parameters** | |
| $Y \in \mathbb{R}^{M\times M} $ | decoder layer parameters |
| $Z \in \mathbb{R}^{M\times M} $ | self-attention logits |
| $\mathbf{z}_m$ | $m$-th row of $Z$ (i.e., attention logits for a query/last token $m$) |
| **Hyperparameters** | |
| $\eta_Y$ | Learning rate of the decoder layer |
| $\eta_Z$ | Learning rate of the self-attention layer |
| **Token Types and Distribution** | |
| $\psi(n)$ | Mapping from next token $\mathbf{x}_T = n$ to its unique last/query token |
| $\psi^{-1}(m)$ | The subset of next tokens for last/query token $\mathbf{x}_{T-1}=m$ |
| $\mathbb{P}(l\|m,n)$ | Conditional probability of contextual token $l$ given last token is $m$ and next token to be predicted as $n$. |
| $G_{CT}$ | Subset of common tokens |
| $G_{DT}(n)$ | Subset of distinct tokens for $\mathbf{x}_T = n$ |
|**Attention Score** | |
| $\mathbf{\tilde c}_n \in \mathbb{R}^M$ | Unnormalized attention score given next token $\mathbf{x}_T = n$ |
| $\mathbf{c}_n \in \mathbb{R}^M$ | $\ell_1$-normalized attention score given next token $\mathbf{x}_T = n$ |
| $\mathbf{f}_n \in \mathbb{R}^M$ | $\ell_2$-normalized attention score given next token $\mathbf{x}_T = n$ |
| $\mathbf{g} \in \mathbb{R}^M$ | Back-propagated gradient for $f_n$ |
| $F$ | Input matrix of the decoder layer. Each column of $F$ is $f_n$ |
|**Self-attention dynamics**| |
| $r_{l/l'\|n}(t) $ | Relative gain between distinct token $l$ and $l'$ for next token $n$ |
| $B_n(t)$ | Growth factor bound of the relative gain |
| $\gamma(t)$ | Speed control coefficient |
To make things easy to read, we will answer the remaining questions shortly in the next comment. | Summary: This paper presents a rigorous mathematical analysis of the training dynamics of a 1-layer Transformer architecture without positional encoding for the task of next token prediction. The authors demonstrate that the self-attention mechanism in the Transformer exhibits a discriminative scanning algorithm that gradually focuses on distinct key tokens while paying less attention to common key tokens. The paper also shows that the self-attention layer undergoes a phase transition controlled by the learning rate of the decoder layer, resulting in a stable token combination. The authors verify their findings on synthetic and real-world data (WikiText-103).
The reviewer did not check the mathematical proofs in this manuscript.
Strengths: 1. The paper provides a formal and mathematically rigorous analysis of the training dynamics of 1-layer Transformer models, contributing to a better understanding of how these models work.
2. The paper demonstrates the impact of the learning rate on the phase transition in the self-attention layer and its influence on the final token combination.
3. The authors verify their findings on both synthetic and real-world data, which strengthens the validity of their conclusions.
Weaknesses: 1. The experimental part of the paper is limited in scope, focusing on 1-layer Transformer models without positional encoding and not addressing more complex architectures.
2. The paper's assumptions (no positional encoding, long input sequence, and faster learning in the decoder layer) may limit the generalizability of the findings to a broader range of Transformer models and tasks.
3. In the training and fine-tuning of the Transformer model, Adam/AdamW is used more often than the SGD analyzed in this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments!
> The experimental part of the paper is limited in scope, focusing on 1-layer Transformer models without positional encoding and not addressing more complex architectures.
We emphasize that most of the experiments focus on verifying the theory. We also tried a 3-layer transformer on WikiText-2 and showed the attention-sparsity behavior predicted by the theory. Overall, our work is mainly theoretical and we focus on checking whether our theoretical predictions hold, so we mainly focus on the 1-layer transformer.
> Adam/AdamW are used more often in practice
Note that previous and concurrent works [1][2], some published in ICLR'23/ICML'23, also focus on the setting of SGD training of 1-layer transformers. While Adam is used in practice, SGD is a base case that we want to understand well first.
Also, in this work, we empirically show the behavior of the Adam optimizer in our synthetic setting (Fig. 5), which is indeed different from SGD and exhibits very interesting behaviors. With Adam, the frequency bias seems to be controlled not only by the co-occurrence frequency of key and query tokens, but also by Adam's learning rate. This could explain why a cosine learning-rate schedule is needed for Adam, in order to sweep through the possible co-occurrence frequency pairs. Note that this phenomenon is not mentioned by previous works, to the best of our knowledge, and could trigger interest in the community.
We leave a rigorous analysis on Adam to future work.
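To give intuition for why Adam's learning rate, rather than gradient (co-occurrence) magnitude, can control step sizes, here is a generic one-step sketch of the Adam update (without bias correction). This is purely illustrative and not the paper's analysis; all names and values are made up.

```python
import numpy as np

# Generic sketch (not the paper's analysis): Adam normalizes each coordinate's
# update by its running gradient magnitude, so rare (small-gradient) and
# frequent (large-gradient) coordinates take steps of similar size ~ lr,
# unlike SGD where the step is proportional to the gradient itself.
def adam_step(g, m, v, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    return lr * m / (np.sqrt(v) + eps), m, v

g = np.array([1.0, 1e-3])          # frequent vs. rare co-occurrence gradients
step, m, v = adam_step(g, np.zeros(2), np.zeros(2))
ratio_adam = step[0] / step[1]     # close to 1 after Adam's normalization
ratio_sgd = g[0] / g[1]            # ~1000x difference for plain SGD
assert ratio_adam < ratio_sgd
```

This is one way to see why, under Adam, the learning rate itself (instead of raw co-occurrence frequency) influences which token pairs get updated.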
> Assumptions may limit the generalizability
We want to emphasize that these assumptions are reasonable. Prior work has already shown that decoder-only approaches also work reasonably well without positional encoding [B], and both GPT-4/Claude and the LLM community are now exploring much longer sequence inputs [C, D] (e.g., 32k, 65k or longer). We explained the third assumption ($\eta_Z \ll \eta_Y$), a technical assumption for the theorems, in the overall rebuttal.
**Reference**
[1] Li et al., "A Theoretical Understanding of shallow Vision Transformers: Learning, Generalization, and Sample Complexity", ICLR 2023.
[2] Oymak et al., "On the Role of Attention in Prompt-tuning", ICML 2023.
[3] Tarzanagh et al., "Max-Margin Token Selection in Attention Mechanism", https://arxiv.org/pdf/2306.13596.pdf
[B] A. Kazemnelad et al, The Impact of Positional Encoding on Length Generalization in Transformers, arXiv'23
[C] S. Chen et al, Extending Context Window of Large Language Models via Positional Interpolation, arXiv'23
[D] J. Ding et al, LONGNET: Scaling Transformers to 1,000,000,000 Tokens, arXiv'23
---
Rebuttal Comment 1.1:
Title: Let us know if you have more questions.
Comment: Dear reviewer zgiA, the deadline of the discussion period is approaching. Please let us know if you have any further concerns regarding our work. Thanks!
---
Rebuttal Comment 1.2:
Title: Re: Rebuttal by Authors
Comment: Overall, the reviewer is satisfied with the author's response. The rating has been increased accordingly. | Rebuttal 1:
Rebuttal: We thank all reviewers for their insightful feedback.
We are glad to hear that reviewers agree that a rigorous framework/analysis of the training dynamics of Transformers is valuable, interesting, novel and timely [**sxfK**, **TXYd**, **nH6i**], with clear high-level intuitions [**TXYd**, **nH6i**] and experiments on both synthetic and real-world datasets [**zgiA**]. We address the common questions below and reply to each reviewer with answers to their detailed questions.
**[ZSxf, zgiA, nH6i] Regarding the setting of 1-layer transformers**. We observe a mixed point of view around this aspect. Some reviewers think it is too simplistic (**ZSxf**, **zgiA**) while others (nH6i) think it makes the overall picture clear and helps understanding.
From our point of view, analyzing a 1-layer transformer is a necessary step. As shown in our paper, there are many interesting and non-trivial behaviors even in this apparently simple case. To see this, consider the previous and concurrent works [1][2][3] listed by reviewer **TXYd**, who asked for a comparison. [1][2][3] are important works, with [1][2] published in ICLR'23/ICML'23, and all focus on 1-layer transformers (= 1 layer of self-attention + 1 FFN). A common theme is that they show the self-attention in a 1-layer transformer attends to relevant tokens during training under various settings and data models (e.g., when the model is fixed and we fine-tune on soft prompts); therefore, even with 1 layer, many phenomena are nontrivial and deserve dedicated works published in top-tier conferences.
**Our contributions**. Compared to [1][2][3], our work is novel in the following ways:
+ Among relevant/distinct tokens of a sequence class, we characterize their relative growth quantitatively, showing that self-attention attends to *a subset of distinct tokens* that co-occur a lot with the query, leading to attention sparsity even among relevant tokens. In comparison, [1][2] only show high attention scores for relevant tokens, so attention sparsity relies on the assumed number of relevant tokens.
+ We characterize the complete training dynamics for multi-class settings by *integrating the nonlinear dynamics analytically*. We also summarize the two-stage behavior of attention scores (scan & snap) toward convergence. In comparison, [1] relies on a much more complicated technique, [2] focuses on the initial gradient steps, and [3] assumes linear separability of tokens generated from two classes and uses a max-margin SVM framework.
We list the detailed comparison in our rebuttal to reviewer **TXYd**.
Note that even with 1 layer, our work is already criticized as very dense (**sxfK**). In fact, it may be **unrealistic** to expect a rigorous and detailed formulation of a multi-layer transformer in a single 9-page conference paper that at the same time has a rich connection with real-world applications. It is better to focus on simplified settings to make the take-home message clear. Therefore we focus on the 1-layer Transformer and leave its multi-layer extension as future work. We really appreciate reviewer **nH6i** for the understanding!
**[zgiA, nH6i] Future directions to address multi-layer cases**. For multi layer, an important component is how the input tokens are combined together to form high-level concepts during training. Our work shows that the training leads to sparse attention even among relevant tokens, and demonstrates that there is a priority in token combinations for 1-layer attention based on their co-occurrence: even if 10 contextual tokens are relevant to the query, the self-attention may pick 1-2 token to combine first due to attention sparsity. This can be regarded as a starting point to study how tokens are composed hierarchically. In comparison, showing that attention attends to all relevant tokens [1][2][3] may not suggest a hierarchical / multi-layer architecture, which is used in practice.
**[ZSxf, sxfK, zgiA] Concerns on the strength of assumptions.** Understanding transformers in a mathematically rigorous manner is a highly nontrivial problem, and the assumptions we make are comparable with or even weaker than those of previous works. For example, [R1] analyzes positional attention with symmetric initialization, without considering input data. [1] models the data distribution as discriminative/non-discriminative patterns, similar to ours, assumes a hinge loss (rather than the cross-entropy loss), and performs SGD training. [2] also models the data distribution as relevant/irrelevant. [R3] models transformer dynamics near initialization and freezes many parameters at their random initialization. In comparison, we characterize the entire training dynamics with a few well-defined assumptions. We list the detailed comparison in the reply to reviewer **TXYd**.
**[ZSxf, TXYd] Faster training of the decoder Y than the self-attention layer Z ($\eta_Z \ll \eta_Y$)**: This is a technical assumption needed to obtain Theorem 1, which establishes the relationship between the input signal $f_n$ of the decoder $Y$ and the back-propagated gradient $g$, once the decoder $Y$ has been sufficiently trained. We indeed see works that use different learning rates at different layers in empirical studies, e.g., [R2].
As future work, we are actively looking for better approaches that can remove Assumption 2 and the separate learning rates from the analysis.
**References**
[R1] S. Jelassi et al. Vision transformers provably learn spatial structure. NeurIPS’22.
[R2] E. Dinan et al. Effective Theory of Transformers at Initialization
[R3] A. Bietti et al, Birth of a Transformer: A Memory Viewpoint
[1] Li et al., "A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity", ICLR 2023.
[2] Oymak et al., "On the Role of Attention in Prompt-tuning", ICML 2023 (Jun. 6)
[3] Tarzanagh et al., "Max-Margin Token Selection in Attention Mechanism", arXiv’23 (Jun. 23) | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper studies the training dynamics of single layer Transformers. They identify a certain scan and snap procedure of the Transformer that learns a winner-take-all solution given certain data statistics / training dynamics or the self-attention learns to combine tokens.
They accompany their theoretical results and analyses with empirical results.
Strengths: I salute the effort of the authors to mathematically study the training dynamics of Transformers. This is very interesting and timely. I think that the authors put in a lot of effort into this study to work out quite interesting results.
Weaknesses: The paper is very dense, hard to parse and therefore difficult to understand. Although I am aware of the difficulty to present theoretical results in a comprehensible manner, I urge the authors to invest time into the presentation of the work.
In general, there are various assumptions made along the paper which is fine if justified or discussed. See questions.
Note that I read the paper a couple of times and still have major difficulties to obtain an overview of the results.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Again, in general I found the assumptions very strong and I felt that these shortcomings could be a bit better addressed in the experimental section.
1) Lemma 1: gradient dynamics of batchsize 1 - you are assuming here training the Transformer on a single example only? This feels very restrictive.
2) Lemma 1: Y and Z are assumed to be independent but depend partially on the same parameter matrices. This feels again like a very strong assumption.
3) Can you elaborate on why you choose to integrate layernorm in your analysis? Could simplify your analyses if not important for your results, or did I miss their importance for your analyses?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I believe assumptions and limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments! Here are the answers:
> Lemma 1: gradient dynamics of batchsize 1 -> Transformer trained with single example only.
First of all, in machine learning, training with batchsize = 1 means that in each gradient step only one sample is used to compute the gradient update; it does not mean we train on a single example only, since each step can use a different sample from the dataset.
Furthermore, the fact that Lemma 1 takes the form of batchsize = 1 does not mean it cannot be applied to batchsize > 1: we can simply sum over the sample index $i$ (omitted in Eqn. 3) to get the gradients of $Y$ and $Z$.
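As a generic illustration of this point (a toy least-squares model, not the paper's Transformer setting; all names are made up), the full-batch gradient is exactly the sum of the batchsize-1 gradients:

```python
import numpy as np

# Toy illustration: for a loss that is a sum over samples, the batch gradient
# equals the sum of per-sample (batchsize-1) gradients, so a batchsize-1
# expression extends to batchsize > 1 by summing over the sample index i.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(4, 3)), rng.normal(size=4)
w = rng.normal(size=3)

def grad_single(x_i, y_i, w):
    # gradient of 0.5 * (x_i @ w - y_i)**2 with respect to w
    return (x_i @ w - y_i) * x_i

grad_batch = X.T @ (X @ w - y)  # full-batch gradient in one shot
grad_summed = sum(grad_single(X[i], y[i], w) for i in range(4))
assert np.allclose(grad_batch, grad_summed)
```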
> Lemma 1: Y and Z are assumed to be independent but depend partially on the same parameter matrices. This feels again like a very strong assumption.
This is a common reparameterization technique used when analyzing Transformers, and many previous theoretical works, including [2] suggested by **TXYd**, leverage this technique. In [2], the key/query matrices are merged into one (their Eqn. 4): $q$ is defined as $W_k W_q^T P^T$ and its dynamics are computed instead of the dynamics of the prompt embedding $P$. In [3] the key-query weights are merged into one matrix $W$ and its dynamics are studied. In [A], similar to ours, $X$ is used to replace $QK^T$ as the variable to be optimized, and the properties of $X$ under optimization are studied instead.
> Why choose to integrate layernorm in your analysis?
LayerNorm plays an important role in our analysis. It provides the additional projection operator $P^\perp_{X^T b_T}$ (as shown in the proof) that cancels out one term in the derivative of the softmax (line 539 in the supplementary material), leading to a much simpler analysis (e.g., Eqn. 95 in the supplementary material). We will make this clear in the revised version.
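To illustrate the kind of projection operator involved, here is a generic numerical sketch (not the paper's exact LN() derivation): the Jacobian of $\ell_2$-normalization $v \mapsto v/\|v\|$ is $(I - vv^\top/\|v\|^2)/\|v\|$, i.e., the backward pass through the normalization applies a projection orthogonal to $v$, killing the gradient component along $v$.

```python
import numpy as np

# Sketch: the Jacobian of f(v) = v / ||v|| equals P_perp / ||v||, where
# P_perp = I - v v^T / ||v||^2 projects orthogonally to v. This is the kind of
# projection operator that normalization layers contribute to the backward pass.
v = np.array([1.0, 2.0, 2.0])
n = np.linalg.norm(v)
P_perp = np.eye(3) - np.outer(v, v) / n**2

# numerical Jacobian of f via forward differences
eps = 1e-6
J = np.zeros((3, 3))
for j in range(3):
    e = np.zeros(3); e[j] = eps
    J[:, j] = ((v + e) / np.linalg.norm(v + e) - v / n) / eps

assert np.allclose(J, P_perp / n, atol=1e-5)
assert np.allclose(P_perp @ v, 0)  # the component along v is cancelled
```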
**Reference**
[1] Li et al., "A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity", ICLR 2023.
[2] Oymak et al., "On the Role of Attention in Prompt-tuning", ICML 2023 (Jun. 6)
[3] Tarzanagh et al., "Max-Margin Token Selection in Attention Mechanism", arXiv’23 (Jun. 23)
[A] S. Li et al, The Closeness of In-Context Learning and Weight Shifting for Softmax Regression, arXiv
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for your response, and note that this was an emergency review, which provided me very limited time. I apologize for the very shallow review. I will discuss with the other reviewers once the rebuttal phase ends, and for now only increase my score slightly.
Many thanks again
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Thanks for your valuable time! We understand that the review process can be time-limited and stressful. Any comments and feedback are welcome, and we will address them to the best of our ability.
Let us know if you have more questions and we would greatly appreciate if you are confident in further increasing the score. Thanks again. | null | null | null | null | null | null |
Feature Likelihood Divergence: Evaluating the Generalization of Generative Models Using Samples | Accept (poster) | Summary: The paper suggests a sample-based method (termed FLS) for evaluating generative models. Similarly to FID and IS, the method uses a pre-trained network (InceptionV3 or CLIP) for image representation. Unlike FID, the proposed metric is based on a variant of KDE - fitting isotropic Gaussians around each generated sample and choosing variance values that maximize the likelihood of a subset of train samples. The metric is then the test likelihood. The authors claim and demonstrate that FLS is good at measuring the fidelity, novelty and diversity of generated samples and specifically has the advantages of being able to detect overfitting and memorization.
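The KDE-style procedure described above can be sketched as follows. This is a simplified illustration with toy 2-D features and a single shared bandwidth chosen from a grid; the actual FLS fits per-sample variances and operates in a pretrained feature space, so all names and values here are illustrative.

```python
import numpy as np

# Simplified sketch of the described metric: place an isotropic Gaussian at
# each generated sample, pick the bandwidth that maximizes likelihood of a
# train subset, then report the held-out (test) likelihood as the score.
rng = np.random.default_rng(0)
gen   = rng.normal(size=(200, 2))   # "generated" features
train = rng.normal(size=(100, 2))   # train subset used to fit the bandwidth
test  = rng.normal(size=(100, 2))   # held-out set used to score

def mean_log_likelihood(points, centers, sigma2):
    # mean log-density of a uniform mixture of isotropic Gaussians at `centers`
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    log_comp = -0.5 * d2 / sigma2 - points.shape[1] / 2 * np.log(2 * np.pi * sigma2)
    return np.mean(np.logaddexp.reduce(log_comp, axis=1) - np.log(len(centers)))

# fit the bandwidth on the train subset ...
sigmas = np.logspace(-2, 1, 30)
best = max(sigmas, key=lambda s2: mean_log_likelihood(train, gen, s2))
# ... and report the held-out likelihood
score = mean_log_likelihood(test, gen, best)
```

A model that memorizes the training set would need a tiny bandwidth to score the train subset well, which then hurts the held-out likelihood, matching the claim that the metric penalizes memorization.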
Strengths: Significance: The ability to quantitively assess the quality of image generation models is highly important, especially due to the recent success and impact of "generative AI". There hasn't been much progress in this area in recent years - FID was introduced in 2017 and is still the most commonly used evaluation method and the de-facto standard. Works in this area therefore have high potential impact and should be encouraged. The proposed method seems to have advantages over FID (e.g. the ability to detect memorization and over-fitting).
Clarity: The paper is very clearly written and is easy to follow. The motivation is clearly stated and the method is well explained.
(I do however have some comments about missing information, in the following sections)
The proposed method of using a KDE-like approach rather than some divergence seems to be original and well motivated.
Weaknesses: Missing information: It is stated that FLS uses "Inception-V3 or CLIP" feature space, however it wasn't clearly indicated what feature space was actually used in each experiment (at least, I couldn't find this information). In addition, I would expect to see an analysis of the different behavior of these two representations (pros and cons of each).
Additional information that I could not find was the number of samples used for FLS evaluation and how it compares with the number of samples needed for FID. This specifically raised a concern because KDE methods (fitting a Gaussian around each sample) in general require a large number of samples. I would also expect that the ability to detect overfit depends on the number of used samples.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: About the correlation between FLS and Sample Fidelity (4.1). Can the authors explain what causes this correlation and the different behavior than in FID? Is it the embedding network or the different approaches to compare distributions?
Have the authors tried evaluating more recent generative models (e.g. Stable Diffusion) on larger datasets (e.g. LAION)? Showing that FLS can scale to these models and dataset can strengthen the significance of the paper.
The proposed method seems to be somewhat related to the NDB evaluation method proposed in [1].
[1] E Richardson, Y Weiss, On GANs and GMMs, Advances in neural information processing systems, 2018
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: A discussion about limitations of the proposed method seems to be missing. One example of such possible limitation is the ability of FLS to scale up to modern image-generation models (e.g. trained on the LAION dataset), generating a much larger variability of images (can the KDE-like approach capture this diversity ?)
I could not identify a possible negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and feedback and we value the fact that the reviewer felt “the ability to quantitively assess the quality of image generation models is highly important” and that works in this area have a “high potential impact”. We also are thrilled to hear that the reviewer found our motivation as being “clearly stated”; that our paper was “well-written” and that our method was “well-explained” and has “advantages over FID”.
### Missing Information
We thank the reviewer for raising the concern about which feature space was used in our experiments. We use Inception-v3 (for a fair comparison with FID, which also uses Inception-v3), except for the ImageNet experiments, which use CLIP (since Inception-v3 was trained on ImageNet).
Recent work has also demonstrated that DINOv2 is a compelling alternative feature space for images, and we provide results using this feature space in our 1-page PDF. We will add these results to our appendix and explicitly indicate which of Inception-v3, CLIP, or DINOv2 is used where.
### Number of Samples used for FLS
We thank the reviewer for pointing out the importance of the number of samples used for evaluation. For most of our experiments, we use 10k train/test/gen samples (except Table 1 where we use the full train set and 20k generated samples).
Importantly, in Fig. 10, we find that FLS is effective and unbiased even when using fewer test samples, which is of particular importance for applications in low-data regimes or conditional generation (e.g., the test set of CIFAR-10 only contains 1k images per class). On the other hand, the authors of FID highly recommend a minimum of 10k samples [1]. Generally, however, the whole training set is used, as the FID value is still biased at 10k samples and a noticeably lower score can be obtained by using more.
Similarly, most papers use 50k generated samples, as this gives a lower FID than 10k. We find that using 10k generated samples is sufficient for robust FLS evaluation, in contrast with the number used for FID.
### Correlation between FLS and Sample FID
We thank the reviewer for this great question. The difference is not due to the embedding networks, as both FLS and FID use Inception-v3 in this experiment. The behavior differs in two ways. For imperceptible transforms, both metrics are affected due to the imperfection of the feature space. We posit that these transforms affect FID more because they shift the entire distribution of generated samples; since FID measures a distance between distributions, this distribution-wide shift likely has a larger effect (for example, on the covariance matrices of the fitted Gaussians). Conversely, in FLS, the pairwise distances are only slightly increased by these smaller transforms. For Figure 5, we believe the non-linear effect better matches our intuition about the ranking of models and their use in downstream tasks (more details in the global response).
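For context, FID's sensitivity to distribution-wide shifts can be seen from its standard formula, which compares Gaussian fits (mean and covariance) of the two feature sets, so a small global shift applied to every generated sample moves the fitted parameters directly. The sketch below is illustrative, not the official FID implementation.

```python
import numpy as np
from scipy import linalg

# Sketch of the standard FID formula: fit a Gaussian (mean + covariance) to
# each feature set and compare them. A global shift of every generated sample
# moves the fitted mean directly, inflating the score.
def fid(feats_a, feats_b):
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b).real
    return ((mu_a - mu_b) ** 2).sum() + np.trace(cov_a + cov_b - 2 * covmean)

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 4))
genr = rng.normal(size=(500, 4))
shifted = genr + 0.5  # a small global shift hits the mean term of FID
assert fid(real, shifted) > fid(real, genr)
```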
### Scaling to Text-Image Datasets
We value the reviewer's feedback on using FLS to evaluate recent generative models, such as Stable Diffusion on LAION. This is indeed an interesting direction of investigation! However, we believe such experiments are beyond the scope of this current paper as Stable Diffusion and models of this type are multi-modal generative models—i.e. text and image—and this presents fresh challenges. For example, we would need to approximate $p(x|y)$ for an unseen text prompt $y$ and it is not immediately clear how this can be accomplished. Note that this is a different setting than the image conditional experiments we considered in Tab. 2 where $y$ is from a known and finite set of class labels. Furthermore, the datasets we chose to evaluate largely are those examined by prior works on evaluation of non multi-modal generative models. Due to these complexities, we believe LAION and Stable Diffusion type models require an extended investigation which is deserving of its own paper.
### Method is similar to NDB evaluation
We thank the reviewer for pointing out this interesting reference which we will include in our updated manuscript. Their method, instead of estimating densities like FLS, performs a statistical test on the number of samples in various Voronoi cells for the training/generated sets. The Voronoi cells are constructed by performing K-means clustering of the training set (all of this is done in the pixel space). As a whole, the process is quite different from FLS.
Nonetheless, it is interesting to note that they also use mixtures of Gaussians but for a completely different purpose (as the generative model).
### Discussion on Limitations
The reviewer raises a valuable point regarding adding more discussion of the potential limitations of FLS. We agree with the reviewer that such a discussion can help practitioners effectively use FLS to evaluate their own generative models. As we outlined in our response to Reviewer cRFC, we find that FLS is less sensitive in the scenario where there is a small amount of memorization but the generative model otherwise generalizes well (according to definitions 3.1 and 3.2). However, in cases where there is only memorization, FLS still correctly reflects this in its score.
While we acknowledge the reviewer's comment regarding a limitation of FLS on larger-scale datasets like LAION, we believe this is not a pertinent limitation (as we argue in our response above). We believe that the linear computational complexity of FLS in the number of samples, i.e. $O(n)$, is in line with the best one can hope for from any evaluation metric that operates purely on samples. We will add a larger discussion of these points in the main paper.
We thank the reviewer for their time and effort in reviewing our work, and we hope our efforts clarify the main points and that the reviewer will consider improving their score.
[1] https://github.com/bioinf-jku/TTUR
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the thorough rebuttal. Specifically, the fact that FLS does not require an especially large number of samples and is an unbiased estimator is reassuring. I still think handling the new large generative models and datasets would have increased the impact of the method, but I agree with the authors that this can be future work.
---
Summary: The work proposes a new metric (FLS) for image generation tasks, motivated by the observation that current metrics consider the quality and the diversity of the samples but not their novelty, and therefore do not penalize memorization of the training set.
The proposed method instead encompasses all three aspects in a single evaluation.
To compute the score, they extract features from the generated, training, and test images. They then build a mixture of isotropic Gaussians centered around the embeddings of the generated samples. The variances are optimized to maximize the log-likelihood of samples of the training set. FLS is then the likelihood of the test samples.
The properties of the metrics are demonstrated by evaluating a number of well-established GAN models.
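The pipeline described in this summary (one isotropic Gaussian per generated-sample embedding, variances fit by maximizing the train-set log-likelihood, score given by the test-set likelihood) can be sketched in a few lines. This is an illustrative reconstruction from the summary, not the authors' code; the learning rate, iteration count, and gradient-ascent fitting procedure are arbitrary choices here:

```python
import numpy as np
from scipy.special import logsumexp

def fit_variances(gen_feats, train_feats, n_iters=200, lr=0.1):
    """One isotropic Gaussian per generated sample; optimize each
    log-variance by gradient ascent on the train-set log-likelihood."""
    d = gen_feats.shape[1]
    log_var = np.zeros(len(gen_feats))  # initialize sigma^2 = 1
    sq = ((train_feats[:, None, :] - gen_feats[None, :, :]) ** 2).sum(-1)
    for _ in range(n_iters):
        var = np.exp(log_var)
        # log of mixture component densities N(x_i | mu_j, var_j I) / J
        log_p = -0.5 * (sq / var + d * np.log(2 * np.pi * var)) - np.log(len(gen_feats))
        resp = np.exp(log_p - logsumexp(log_p, axis=1, keepdims=True))
        # gradient of the mean train log-likelihood w.r.t. each log-variance
        grad = (resp * 0.5 * (sq / var - d)).sum(0) / len(train_feats)
        log_var += lr * grad
    return np.exp(log_var)

def feature_likelihood_score(gen_feats, train_feats, test_feats):
    """Mean log-likelihood of test features under the fitted mixture."""
    var = fit_variances(gen_feats, train_feats)
    d = gen_feats.shape[1]
    sq = ((test_feats[:, None, :] - gen_feats[None, :, :]) ** 2).sum(-1)
    log_p = -0.5 * (sq / var + d * np.log(2 * np.pi * var)) - np.log(len(gen_feats))
    return logsumexp(log_p, axis=1).mean()
```

A Gaussian centered on a memorized sample can shrink its variance toward zero to spike on its training-set copy, which is what makes the fitted variances informative about memorization.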
Strengths: - The paper is well-motivated. While memorization was not much of an issue for generative models so far, some recent study indicates that this might be the case now with the introduction of new powerful generative models. A holistic evaluation including novelty would be a useful tool going forward.
- The principles behind FLS are well-grounded in already established methods (FID, Precision/Recall) and the changes are sound and motivated.
- FLS is not biased by the number of samples and can therefore be computed on a smaller number of points than FID.
- Experiments are extensive, covering many common use-cases including truncation trick, and show promising results. In particular, FLS correlates well with other metrics when memorization is not an issue and the overfitting detection experiments in the main paper and the supplementary are convincing enough.
Weaknesses: A very important aspect is the reliability of FLS w.r.t. the number of evaluation samples. As with FID, in practice, people will use the metric in diverse settings and it is important to know when the metric is reliable and when it isn't. The paper touches upon these questions in Fig. 10 but could be more complete. In particular:
- Since the models we compare display very little overfitting, Fig 10 only evaluates the robustness of FLS on the Quality and Diversity aspects, not on the Novelty aspect. As novelty evaluation is one central selling point of the metric, information about the reliability of the overfitting detection aspects is necessary.
- For the Quality and Diversity aspects, FLS should also be compared to KID, which tackles this same issue of sample size bias. When evaluating with a smaller number of samples, standard deviation should also be provided to better assess the test size that would be needed in practice. Some intervals are depicted in Fig. 10 but they are barely visible and not commented on at all.
FLS might be harder to interpret intuitively than FID. There is a chance that it could lead to misuse or misunderstandings down the road.
- In the results presented in Fig. 5, FLS is less linear than FID w.r.t. the % of perturbed data. I would guess this effect is caused by the fact that the variance is adaptive, leading to complex interactions between the samples. This could make the interpretation of the values even less intuitive, as we are more used to linear scales.
- In the same figure, until larger percentages of perturbed data, random rotation, elastic transforms, center crops and bicubic resizes all seem to yield the same FLS, while FID assigns different values to different perturbations. As far as I'm concerned, FID results seem more aligned with my intuitions than FLS.
Other weaknesses:
- FLS might be unsuited to easier tasks. Most well-performing models are nearly indistinguishable by FLS in Table 2 (conditional cifar10).
- The computation of FLS has more steps than FID, making it more cumbersome to use. It is also more compute-demanding when scaling.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My main concerns are, for the sake of practical usage, about the assessment of the minimum number of samples needed:
- Can the authors provide a reliability assessment for overfitting detection in few-sample settings?
- Can the authors update Fig.10 with KID and standard deviations?
- Accordingly, can the authors provide recommendations in terms of the minimal number of samples for reliable assessment?
Related to Fig5 and the related discussion in Section 4.1:
- the authors mention that “FLS is noticeably less affected”, but it is not that clear since we don't have a scale on which to compare the two metrics. Can the authors explain more explicitly how they derive this observation?
As an additional question, one aspect that can be useful in some cases is adversarial perturbation detection.
- How do the authors anticipate the metric to behave when faced with adversarially perturbated images? Considering it is supposed to be more robust to noise, it might not be affected by such perturbation. This can be a good or bad thing depending on the task to be evaluated, but useful to know regardless.
As a final comment, because the paper is proposing a holistic metric, I believe it might be important for the paper to include, in a separate section, a discussion summarizing good practices and recommendations about when and how to use it.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are adequately discussed in the paper and broader impacts in the supplementary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their time and detailed review of our work. We are glad that the reviewer finds our paper “well-motivated”, “well-grounded”, and that the holistic evaluation of generative models we perform by introducing FLS is a “useful tool going forward”. We also appreciate that the reviewer thinks that our empirical results are supported by “extensive” experiments and that the overfitting detection experiments are “convincing enough”. We now address the key clarification points grouped by theme.
### Effect of dataset size on FLS (and comparison with KID)
>For the Quality and Diversity aspects, FLS should also be compared to KID, which tackles this same issue of sample size bias.
We have run the experiment and will include an updated version of Fig. 10 with KID in the final version of the paper and a complete description of the figure (currently, the confidence intervals are 95% confidence intervals of the bootstrapped distribution). They are very small, essentially invisible for FID. The KID results can be summarized as follows:
- KID exhibits little to no bias at all dataset sizes but more variance than FLS
- FLS exhibits some bias at very small dataset sizes (<500) and more variance than FID.
KID correctly addresses the sample size bias issue of FID but still has the same problem as FID when it comes to detecting overfitting.
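For reference, KID is the unbiased estimator of the squared MMD with a cubic polynomial kernel (Bińkowski et al.), which is why it does not share FID's sample-size bias. A minimal sketch on pre-extracted features:

```python
import numpy as np

def kid(feats_x, feats_y):
    """Unbiased MMD^2 estimate with the cubic polynomial kernel."""
    d = feats_x.shape[1]
    k = lambda a, b: (a @ b.T / d + 1) ** 3
    m, n = len(feats_x), len(feats_y)
    kxx, kyy, kxy = k(feats_x, feats_x), k(feats_y, feats_y), k(feats_x, feats_y)
    # drop diagonal terms of the within-set sums for unbiasedness
    sum_xx = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    sum_yy = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return sum_xx + sum_yy - 2 * kxy.mean()
```

Like FID, a KID of zero is attainable by copying the training set, which is the shared overfitting blind spot noted above.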
### FLS might be harder to interpret intuitively than FID.
> In [...] Fig5, FLS is less linear than FID w.r.t. the % of perturbed data.
We agree that this is an important point and believe a non-linear relationship makes sense given the x-axis of the figure. Specifically, a model that produces perfect samples 10% of the time and poor samples 90% of the time has learned to generalize significantly better and is more useful than one that produces poor samples 100% of the time. This argument is detailed further in the global response.
### FLS might be unsuited to easier tasks.
> Most well-performing models are nearly indistinguishable by FLS in Table 2 (conditional cifar10).
We thank the reviewer for pointing out this mistake on our end. Table 2 in the paper contains unnormalized NLLs (i.e. before subtracting C and multiplying by 100). When fixed, the table contains differences in value of similar scale to Table 1. We will fix this small error in the final paper.
### FLS is more expensive computationally
> The computation of FLS has more steps than FID, [...] It is also more compute-demanding when scaling.
In practice, we find that FLS requires slightly more time to compute than FID at large dataset sizes (as shown in Appendix F), with computation time scaling linearly with the number of samples. Nonetheless, we believe this is a non-issue as the main bottlenecks are mapping samples to the chosen feature space (~10x slower than computing FLS) and producing samples (up to 100x slower for certain diffusion models).
As for ease of use, all code will be available on GitHub with an easy to use interface for practitioners (provide a folder of images, tensor of features or function that produces samples and the FLS is returned).
> Can the authors provide a reliability assessment for overfitting detection in few-sample settings?
We agree with the reviewer that the few-sample setting is an important area for overfitting detection. While we do not currently have a full reliability assessment, in Appendix B.3, we evaluate StyleGAN2-ADA models trained on datasets of varying sizes (500-4000). There, we find that the % of overfit Gaussians identifies overfitting not detected by other methods.
> Fig5 and the related discussion in Section 4.1: the authors mention that “FLS is noticeably less affected”,[...] Can the authors explain more explicitly how they derive this observation?
Fig. 4 of the rebuttal provides a clearer figure illustrating this behavior. For the transforms which have very little visual effect on humans, the effect on FLS is relatively small, whereas for FID it leads to a significant increase, with EDM samples rated similarly to those produced by SNGAN.
> How do the authors anticipate the metric to behave when faced with adversarially perturbated images?
That is an interesting idea! While we have not tested with adversarially perturbed images, in light of the above (i.e. the effect of imperceptible transforms), FLS might be less affected by these perturbations than FID. However, FLS is dependent on the feature embeddings: adversarial perturbations with respect to Inception-v3 will have a larger effect. This could be tackled by using the features of robust models (e.g. [1]).
> include, [...] a discussion summarizing good practices and recommendations about when and how to use it.
Thank you for the great suggestion. We will add the following list of good practices to the paper.
- Keep a separate test set for evaluation. Many generative modeling datasets do not have an explicit test set (FFHQ, LSUN-Bedroom, or LAION [2]).
- Recommended to use **a minimum of 1k test samples**, preferably 10k.
- Recommended to use **a minimum of 10k generated samples** preferably 50k.
- Look at FLS but also the generalization gap (difference between Train/Test FLS) and the percentage of overfit Gaussians to identify potentially problematic overfitting.
- Visually inspect the maximally overfit samples as a means to qualitatively evaluate overfitting behavior.
We thank the reviewer for their valuable feedback and great questions. We hope that our rebuttal fully addresses the salient points raised by the reviewer, and we kindly ask the reviewer to consider upgrading their score if satisfied with our responses. We are also more than happy to answer any further questions that arise.
[1] Wang, Zekai, et al. "Better diffusion models further improve adversarial training." 2023.
[2] Schuhmann, Christoph, et al. "Laion-5b: An open large-scale dataset for training next generation image-text models." 2022.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal and appreciate their consideration of my remarks.
My concerns have been convincingly addressed, and I believe the new additions that emerged will be valuable.
In particular, I agree with the authors that linearity w.r.t. the percentage of perturbed data is not that important, especially compared to the results in Figure 1 of the rebuttal.
Among other things, I also appreciated the recommendations that look reasonable, the updates on KID, and the discussions w.r.t. imperceptible perturbations.
From the responses to other reviews, the DINO features experiments in particular would be a very nice addition, providing new insights for generative model evaluation in general beyond this specific method.
I will take these elements into account in my final assessment along with the updates from other reviewers before the end of the discussion period.
Update: I am raising my rating from WA to Accept
---
Summary: The paper proposes an approach for evaluating generated images by fitting a mixture of Gaussians (MoG) to feature embeddings extracted from the generated and real images. For this, images are first mapped to a feature space (e.g. via an Inception or CLIP image encoder), and a Gaussian is then assigned to each image feature, with its mean centered at that feature. The variance of each Gaussian is then optimized via NLL such that the MoG fits the training examples or a subset thereof.
The MoG can then be used to evaluate image memorization (any Gaussian with near-zero variance will be a memorized sample) and overfitting (if the MoG assigns higher likelihood to the train set than to an unseen test set), as well as sample fidelity and diversity.
The experiments show that the new metric is more robust to small transformations in image space than FID and generally correlates well with FID, precision, and recall.
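The two diagnostics described in this summary can be sketched directly: a Gaussian with near-zero fitted variance flags a memorized sample, and a positive train-minus-test likelihood gap flags overfitting. This is an illustrative reconstruction of the summary's description, not the paper's exact criteria; the variance threshold `var_eps` is an arbitrary choice here:

```python
import numpy as np
from scipy.special import logsumexp

def mog_log_likelihood(feats, centers, variances):
    """Mean log-likelihood of feats under an isotropic MoG."""
    d = centers.shape[1]
    sq = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    log_p = -0.5 * (sq / variances + d * np.log(2 * np.pi * variances))
    return logsumexp(log_p - np.log(len(centers)), axis=1).mean()

def overfitting_report(centers, variances, train, test, var_eps=1e-3):
    """Train/test likelihood gap plus fraction of near-zero-variance
    Gaussians (a proxy for memorized samples)."""
    gap = (mog_log_likelihood(train, centers, variances)
           - mog_log_likelihood(test, centers, variances))
    frac_memorized = (variances < var_eps).mean()
    return gap, frac_memorized
```

A large positive `gap` means the mixture fits the training set much better than unseen data, i.e. the generalization failure the summary describes.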
Strengths: The approach aims to improve upon existing and well-used metrics such as FID and precision+recall. As such, it is easy to calculate, needs minimal human intervention, and scales to large numbers of images.
It also aims to improve the evaluation by detecting overfitting/memorization which many current metrics can't detect. Finally, it can provide more detailed insights into models by evaluating the quality of specific classes individually. The approach also seems to be more robust to using smaller sample sizes (e.g., <5K samples), whereas other metrics such as FID need tens of thousands of samples.
Weaknesses: While the approach promises some improvements over other metrics the main improvement seems to be being able to detect overfitting and memorization. However, I am not convinced that feature embeddings are the best way to detect memorization. E.g., looking at the memorized results in Fig 8, those results don't look like memorization to me. Were these obtained based on Inception or CLIP embeddings?
More detailed insights into the performance of specific classes is also already possible with other metrics, e.g., conditional FID.
Aside from that the main other advantage seems to be that the approach is more sample efficient but that is a minor difference since calculating FID and other metrics is usually not the computationally intensive part of model training and evaluation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I also wonder about the relationship between this approach and precision+recall. Couldn't precision+recall also be used for detecting memorization and overfitting by applying a similar approach as this one here, applied to the distances already calculated by precision+recall? Would that be simpler? Would the results agree with the results of this metric?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: My main concern is that the most obvious improvement of the new evaluation method, namely easily detecting memorization and overfitting, may not work as expected since it relies on image embeddings from feature extractors that were not trained for this (neither Inception nor CLIP were trained to detect memorization and instead focus more on high-level features). I would not define image in Fig 8 as memorizations.
Aside from that, other improvements/differences seem marginal compared to existing metrics such as FID and precision+recall and it's not clear to me what exactly makes the new metric better than, e.g., FID.
---
I have adjusted my score to WA based on the author's rebuttal.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their time and feedback on our manuscript. We are happy to hear that the reviewer views FLS as “easy to calculate” and that it can scale to a “large number of images”. We also are pleased by the reviewer stating that FLS aims to evaluate the overfitting/memorization behavior of generative models, a fact that “many current metrics can’t detect”. Finally, we thank the reviewer for acknowledging that FLS is “more robust” to smaller sample sizes compared to FID. We now address the main comments and questions raised by the reviewer.
### Detecting memorization using FLS
> I am not convinced that feature embeddings are the best way to detect memorization […]
We acknowledge the reviewer's healthy skepticism regarding the memorized examples shown in Fig. 8. Our current Fig. 8 was obtained using Inception features. We agree that these features may not be ideal and many works have indeed revealed their limitations [1, 2, 3]. While raw L2 distances have shown some promise at detecting memorized examples [4], they are also unreasonably sensitive to imperceptible transformations [5].
We investigate the effect of using a more modern feature extractor in DINO-V2 [3,6] combined with a slightly modified ranking. These results can be found in our 1-page rebuttal PDF in Fig 2. We believe the presented samples are undeniable evidence of copies. We provide more details about this additional experiment in the global response. Specifically, many of these copies (e.g. the car that appears many times in the CIFAR10 training set) exhibit a slightly modified scale/lighting or background shading than the training samples which would lead to a higher pixel-wise distance than their distance in a sufficiently strong feature space. Given this compelling evidence, we believe FLS using DINO-V2 as a feature space is capable of effectively detecting copies.
### Using precision+recall to detect memorization
> Couldn't precision+recall also be used for detecting memorization […] Would the results agree with the results of this metric?
That is an excellent question. FLS bears similarities to a continuous/smooth version of recall. Unfortunately, directly applying recall as in [8] to the test set (i.e. using the generated samples to get nearest-neighbor (NN) distances) yields the same problem as FID: a model that generates exact copies outperforms SOTA models in recall (0.74 vs 0.70 for EDM).
If we understand correctly, the reviewer is instead referring to using the train samples to get NN distances (similarly to FLS) for the manifold of generated samples.
While this form of recall would be simpler to compute, it presents a key issue that FLS addresses: fitting on the train set unfairly rewards bad models, since poor-quality samples are far from the train set, resulting in the construction of balls with larger radii and the engulfing of more test samples. As a result, many models obtain very similar recalls for $k=1$:
- EDM: 0.41
- StyleGAN2: 0.4
- BigGAN-CR: 0.4
- SNGAN: 0.39
A similar phenomenon occurs for $k > 1$, for example at $k=4$, used in [6]. The binary nature of the recall metric (a sample is either inside or outside the ball) is problematic here. Contrastingly, the likelihood assigned by the MoG in FLS smoothly takes into account the distance of that sample to the MoG. As for precision, it is not immediately clear to us how it could be used to detect overfitting but we are happy to discuss and test other suggestions.
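For concreteness, the standard improved recall of [8] that this discussion builds on can be sketched as follows: each generated sample gets a ball whose radius is its k-th nearest-neighbour distance within the generated set, and recall is the fraction of real samples landing inside at least one ball. The hard inside/outside test in the last line is exactly the binary thresholding that the MoG likelihood in FLS replaces with a smooth distance-aware score:

```python
import numpy as np

def knn_radii(feats, k=3):
    """Distance from each point to its k-th nearest neighbour in the set."""
    d = np.sqrt(((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1))
    return np.sort(d, axis=1)[:, k]  # column 0 is the zero self-distance

def recall(real_feats, gen_feats, k=3):
    """Fraction of real samples covered by the union of balls around
    generated samples (improved recall of Kynkaanniemi et al. [8])."""
    radii = knn_radii(gen_feats, k)
    d = np.sqrt(((real_feats[:, None, :] - gen_feats[None, :, :]) ** 2).sum(-1))
    return (d <= radii).any(axis=1).mean()
```

Note how a sample just inside a ball and one exactly at its centre contribute identically, which is why nearby models collapse to near-identical recall values in the comparison above.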
### Other Questions
> More detailed insights into the performance of specific classes is also already possible with other metrics, e.g., conditional FID […] not the computationally intensive part of model training and evaluation.
In certain situations (e.g. a small test set for class-conditional evaluation), the main bottleneck is not compute, but rather dataset size. FID is biased and thus requires a large number of samples. For instance, for CIFAR10, practitioners usually consider the FID between 50k generated samples and the training set. Borji [7] mentions that ``A major drawback with FID is its high bias. The sample size to calculate FID has to be large enough (usually above 50K)''. This is particularly problematic for conditional FID: CIFAR10 has 5k train + 1k test samples per class while ImageNet has ~1k train + ~150 test samples per class, so conditional FID would yield a highly biased estimate of the full FID.
### Main improvement
We believe that Fig. 2 in the 1-page PDF, which uses the amended memorization metric combined with the DINOv2 feature space, convincingly demonstrates the ability to detect memorized samples. In addition, even in Inception space, we provide solid evidence that FLS is capable of detecting overfitting. Fig. 7 shows that FLS is highly affected by the addition of copies and these copies are detected by the percent of overfit Gaussians. In App B.1, we show it does so better **even when compared to metrics designed specifically to detect overfitting**.
We thank the reviewer for their time and effort in reviewing our work and we hope the reviewer would kindly consider a fresh evaluation of our work given the main clarifying points outlined above.
[1] Zhou, Sharon, et al. "Hype: A benchmark for human eye perceptual evaluation of generative models." 2019.
[2] Kynkäänniemi, Tuomas, et al. "The Role of ImageNet Classes in Fr\'echet Inception Distance." 2022.
[3] Stein, George, et al. "Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models." 2023.
[4] Carlini, Nicholas, et al. "Extracting training data from diffusion models." 2023.
[5] Schäfer, F., et al. "Implicit competitive regularization in GANs." 2019.
[6] Oquab, Maxime, et al. "Dinov2: Learning robust visual features without supervision." 2023.
[7] Borji, Ali. "Pros and cons of GAN evaluation measures: New developments." 2022
[8] Kynkäänniemi, Tuomas, et al. "Improved precision and recall metric for assessing generative models." 2019.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: I thank the authors for their extensive response. Using DINO features instead of Inception features does indeed seem to work much better, especially for very close neighbors. The response regarding precision+recall addressed my thoughts and I also especially like Fig 1 of the rebuttal and its implications. I have adjusted my score to WA.
---
Summary: This paper addresses the problem that there are currently no sample-based evaluation metrics accounting for the trichotomy between sample fidelity, diversity, and novelty. Likelihood-based metrics are not particularly interpretable, and sample-quality-based metrics do not take novelty into account -- they are easy to cheat just by copying the training data. To address these issues, the paper proposes the feature likelihood score (FLS), which assesses sample novelty, overfitting, and memorization.
Strengths: * The paper deals with an important problem and proposes an innovative solution.
* The paper provides theoretical guarantees that the proposed FLS score can detect overfitting in Proposition 1.
* The paper shows evaluations on a variety of datasets -- CIFAR10, ImageNet, LSUN, and AFHQ -- and on a variety of models -- StyleGAN-XL, SNGAN, LOGAN, LSGAN, etc.
* The paper is well written and easy to understand.
Weaknesses: * Can the proposed FLS score be extended to text-to-image models such as Stable Diffusion or Latent Diffusion?
* The datasets considered such as CIFAR10, ImageNet, LSUN and AFHQ are quite limited in size compared to popular datasets such as LAION. The paper should include an analysis of the effect of dataset size on the ability of FLS to detect generated copies of the training set.
* The paper should also include an analysis of the effect of image resolution on the FLS score -- as the image resolution increases will the FLS score be able to detect copies of the training set images?
* In Figure 7 (right) why does the FID increase for some models when the % copied samples increases.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The paper should motivate in more detail why the CIFAR10, ImageNet, LSUN and AFHQ datasets were choosen as the evaluation platform.
* Is the FLS score applicable to text to image models? Can the FLS model detect duplicates in case of text to image models?
* Does the FLS show the same behavior on very large datasets such as LAION as on smaller datasets such as CIFAR10 or ImageNet?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not discuss the limitations of the FLS score in detail. The paper should ideally also include failure cases to highlight limitations -- where copied training examples do not degrade the FLS score -- if observed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their time, feedback, and positive appraisal of our work. We are heartened that the reviewer feels that FLS tackles an “important problem and proposes an innovative solution”. We also appreciate that the reviewer finds our evaluation extensive in the number of datasets and models while stating our manuscript is “well written” and “easy to understand”.
### Extending FLS to Text-to-image
This is a great suggestion! Evaluating multi-modal generative models is an exciting direction for extending FLS. However, multi-modal models present several technical challenges as we need to estimate $P(x|y)$ for unseen $y$ (unrestricted text prompts) of which usually only a single $(x,y)$ pair is available. This is in stark contrast with evaluating class-conditional image generative models as we do in Table 2 of our manuscript, as the classes $y$ are known $\textit{a priori}$. Thus, while interesting, we believe extending FLS to multi-modal models like latent diffusion deserves its own paper as it requires non-trivial considerations whose solutions are not an immediate application of the conditional FLS.
### Datasets Considered for Evaluation
We appreciate the reviewer's concern regarding the datasets used for evaluation. We believe our selection of datasets is relatively standard for image-generative models and is consistent with the primary ones used in the literature on non-multi-modal generative models [1, 2, 3]. Furthermore, such datasets already have pre-trained models, which enables us to perform a large-scale evaluation on a variety of popular generative models (e.g., GANs, diffusion models). Using pre-trained models on standard datasets also ensures fairness and consistency with models evaluated using other measures in the literature. We understand that this motivation may not have been initially evident in the manuscript, and we will update it to highlight this aspect. As for dataset size, in Figure 10 and Appendix B.3, we demonstrate the applicability of FLS in low-data regimes and the effect of dataset size on FLS. For larger datasets, while we have experiments for ImageNet, LAION is typically not used for non-multi-modal generation. Nonetheless, we believe the aforementioned non-trivial extension of FLS could be used to evaluate generative models trained on LAION and datasets of similar size.
### FLS vs. Image Resolution
We value the reviewer's comment regarding the ability of FLS to detect copies as the image resolution increases. We believe this is a non-issue primarily because FLS operates on the representation space of a pre-trained encoder (e.g., Inception, CLIP, DINOv2) with a fixed dimensionality regardless of the original input image dimension. Consequently, computing FLS is unaffected by the input image resolution, which we demonstrate on a range of datasets, from CIFAR10 with small 32x32 images to AFHQ, consisting of high-quality 512x512 images. For each dataset, we effectively detect overfitting (see Appendix B.2). We hope this allays the reviewer's concern as we argue FLS is independent of image resolution provided a sufficiently rich representation space.
### Fig 7
The increase in FID is due to the copied samples consisting of transformed versions of the training samples. For sufficiently strong transformations, the decrease in quality has a more substantial effect than overfitting (considered beneficial by FID). Please see our global response for a detailed discussion on how fidelity, diversity, and novelty affect FLS.
### Discussion of Limitations
We acknowledge the reviewer's comment on adding a more detailed discussion on the limitations of FLS. We agree with the reviewer that such a discussion can add valuable nuance to our score's proper use and interpretation.
We find that FLS is less impacted if there is a combination of memorization of a subset of training examples and generalization (according to Definitions 3.1 and 3.2). This can partially be seen in Figure 7, where a model producing great novel samples 10% of the time and copies 90% of the time still gets a reasonable FLS despite heavy memorization. Nonetheless, in such a case, the overfitting is still detected by looking at the gap between the train and test FLS.
As discussed in Appendix G, our score is heavily influenced by the embedding we consider. We require good embeddings to assess fidelity, diversity, and novelty. For image domains, many efficient pre-trained models exist (CLIP, DINOv2), but this could be a challenge for other modalities such as audio or time series.
As discussed in Appendix F, our score scales reasonably with the dataset size (i.e., has a similar complexity to its competitors, such as FID). However, it may still be challenging to scale it to very large datasets with billions of training examples.
We will update the paper to highlight such scenarios and include a dedicated limitations section.
### Conclusion and references
We thank the reviewer for their valuable feedback and great questions. We hope that our rebuttal fully addresses all the salient points raised by the reviewer. We kindly ask the reviewer to consider updating their score if they are satisfied with our responses. We are also more than happy to answer any further questions that arise. Please do let us know.
[1] Karras, Tero, et al. "Elucidating the design space of diffusion-based generative models." 2022.
[2] Sauer, Axel, Katja Schwarz, and Andreas Geiger. "Stylegan-xl: Scaling stylegan to large diverse datasets." 2022.
[3] Parmar, Gaurav, Richard Zhang, and Jun-Yan Zhu. "On aliased resizing and surprising subtleties in gan evaluation." 2022.
---
Rebuttal Comment 1.1:
Title: Update
Comment: Most of my concerns have been addressed, Thanks! Will keep my rating at Accept. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thorough reviews and valuable feedback. We are encouraged that they found FLS well-motivated and that the holistic evaluation of generative models has “potential for high impact” (**HK8L, V3Uo**). We also thank the reviewers for viewing our paper as “well written and easy to understand” (**cRFc, V3Uo**). We are also pleased to hear that reviewers found our empirical investigation to contain “extensive experiments” on “SOTA models” on a variety of datasets (**euCM, cRFc, HK8L**). Finally, we appreciate that the reviewers found FLS easy to calculate (**SDnT**) and useful in ranking different models (**euCM**) while being robust to smaller sample sizes in comparison to FID (**SDnT, HK8L**). We now address the main shared concerns, grouped by theme below.
### Additional results on how diversity, fidelity, and novelty affect FLS
To better illustrate the effect of changes in these three aspects of evaluation, we examine how they influence the FLS of 10k CIFAR10 samples generated by EDM G++. We report these results in Fig 1 of the attached 1-page PDF, which shows a plot of the FLS values as heatmaps for each combination of the different axes—i.e., Fidelity vs. Diversity, Novelty vs. Diversity, and Fidelity vs. Novelty. We change the values of fidelity, diversity, and novelty in the following concrete ways:
- **Fidelity:** Increasing severity of Gaussian blur applied to all samples (as measured by the $\sigma$ of the blur).
- **Diversity:** Decreasing diversity by duplicating the same samples several times (e.g., five duplicates correspond to replacing the 10k samples with 2k distinct samples, each repeated five times).
- **Novelty:** Increasing the amount of memorized samples (as measured by the % of generated samples replaced by copies of the training set).
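As a hedged illustration, the three perturbations above can be sketched directly in feature space (the actual experiment perturbs images before embedding; the function name and parameters here are illustrative, not the authors' code):

```python
import numpy as np

def perturb(gen, train, blur_sigma=0.0, n_unique=None, copy_frac=0.0, rng=None):
    """Sketch of the three perturbation axes, applied to embedded samples.

    blur_sigma  -- fidelity: additive Gaussian noise stands in for blur severity
    n_unique    -- diversity: keep only n_unique samples, tiled to original count
    copy_frac   -- novelty: fraction of samples replaced by exact training copies
    """
    rng = rng or np.random.default_rng(0)
    out = gen.copy()
    # fidelity degradation (noise as a stand-in for blur in this sketch)
    out = out + rng.normal(0.0, blur_sigma, out.shape)
    # diversity reduction: duplicate a small subset to fill the sample budget
    if n_unique is not None:
        idx = rng.choice(len(out), n_unique, replace=False)
        out = out[np.resize(idx, len(out))]
    # novelty reduction: memorized copies of the training set
    n_copy = int(copy_frac * len(out))
    if n_copy:
        out[:n_copy] = train[rng.choice(len(train), n_copy, replace=False)]
    return out
```

Sweeping each argument independently reproduces the three axes of the heatmaps described above.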
In summary, Fig.1 indicates that for FLS, fidelity matters more than diversity, which matters more than novelty, with a notable exception when almost all examples are copied, where FLS produces among the worst scores. We argue this ordering is very much aligned with the potential usefulness of the generative model in a downstream task:
- If samples have poor fidelity, then regardless of their diversity and novelty, they will not be useful.
- If samples have poor diversity but good fidelity and novelty, removing duplicates yields a useful generative model.
- If there are some copies among the generated samples, the model is serviceable as long as copies are not the majority.
- However, if the generative model only generates copies, then the generated data is useless (since we already have the training set, we do not need copies of it).
Finally, if a large enough fraction of the generated data points has high fidelity, diversity, and novelty, FLS would provide a relatively good score regardless of the remaining fraction of the generated data. This explains why FLS does not correlate as linearly as FID in Fig 5. We develop this aspect in the following section.
### FLS correlates with sampling fidelity not as linearly as FID
While we agree with Reviewers euCm and HK8L that linear correlation is helpful because it is intuitive, we argue that it is not clear that the most desirable behavior is a linear relationship between our score and **the fraction of perturbed data** (Fig.5 of our submission). In fact, we argue the non-linear relationship is a benefit of our score over FID. For example, consider a model that generates samples with poor fidelity 90% of the time and perfect data 10% of the time. Such a model should be considered significantly better than one that produces poor-fidelity samples 100% of the time. The former has potential uses (especially if one can filter out the bad samples), whereas the latter is nearly useless. This aspect is directly captured by FLS—due to its connection to likelihood—which rates the fully corrupted model as significantly worse (e.g., Fig 5), whereas the more linear relationship found in FID only deems it slightly worse.
Finally, note that in Fig. 1 of the attached pdf, a linear relationship between FLS and the increasing severity of Gaussian blur applied to all samples can be noticed. Thus our score scales linearly with a global change in terms of fidelity (i.e. 100% of the data is similarly corrupted).
### DINO-V2 improves upon Inception features
To address Reviewer SDnT’s comment on the ability of Inception features to measure similarities (and thus detect copies from the training set), we conducted additional experiments using a more modern feature extractor, DINOv2 [1, 2], combined with a new ranking. These results can be found in our 1-page rebuttal PDF in Fig 2 and Table 1 (an updated version of Table 1 in the main paper). Visually inspecting the new memorized samples in Fig 2, we believe we have found undeniable evidence of problematic copies that are more visually striking than those initially presented in Fig 8. Given this compelling evidence, we believe FLS using DINOv2 as a feature space is capable of effectively detecting memorized copies.
### Figure 3. FLS is less sensitive to imperceptible perturbations (for the same feature space)
Another benefit of FLS is that it is less sensitive to small, nearly imperceptible perturbations relative to FID. We make this explicit in Fig 3 of our 1-page PDF with Inception-v3 features for transformations corresponding to slight JPG compression, blurring, and posterizing. While FLS is slightly worsened, the FID of EDM samples reaches that of SNGAN.
We thank the reviewers again for their valuable time and feedback. We hope that we address all their questions with this global response and individual responses. We look forward to further discussion.
[1] Oquab, Maxime, et al. "Dinov2: Learning robust visual features without supervision." 2023.
[2] Stein, George, et al. "Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models." 2023.
Pdf: /pdf/963b3db128dee5b242130316b514e97890f48ab1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: Limitation of existing evaluation metrics
* Likelihood-based metrics rarely correlated with perceptual fidelity.
* Sample-based metrics are insensitive to overfitting. E.g., FID
* Copycat (= a model that randomly outputs the training set) outperforms SOTA generators in $\text{FID}_\text{test}$
Proposed evaluation metric: feature likelihood score (FLS)
1. Map generated / train / test images to an embedding (Inception-v3 or CLIP).
1. Initialize isotropic Gaussians centered at the generated samples.
1. Update the variance of above Gaussians to maximize the likelihood of training samples.
1. Compute the NLL of test samples on the Gaussians.
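A minimal sketch of this pipeline, assuming embeddings are already computed and simplifying step 3's per-Gaussian variances to a single shared variance chosen by grid search (the actual method fits each variance; the function and parameter names are illustrative):

```python
import numpy as np

def fls(gen_feats, train_feats, test_feats, sigma2=None):
    """Toy FLS: isotropic Gaussians centered on generated embeddings,
    variance chosen to maximize train likelihood, score = scaled NLL of
    the test set (the constant C is omitted).  Lower is better."""
    def log_mog(x, centers, s2):
        # log of (1/m) * sum_j N(x | c_j, s2 * I_d), via log-sum-exp
        d = x.shape[1]
        sq = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        logp = -0.5 * sq / s2 - 0.5 * d * np.log(2 * np.pi * s2)
        m = logp.max(axis=1, keepdims=True)
        return m[:, 0] + np.log(np.exp(logp - m).mean(axis=1))

    if sigma2 is None:
        # step 3, simplified: pick the shared variance that maximizes
        # the likelihood of the training samples
        cands = [0.05, 0.1, 0.5, 1.0, 2.0]
        sigma2 = max(cands, key=lambda s2: log_mog(train_feats, gen_feats, s2).mean())
    d = gen_feats.shape[1]
    # step 4: scaled negative log-likelihood of the test set
    return -100.0 / d * log_mog(test_feats, gen_feats, sigma2).mean()
```

A generator whose samples land near the test distribution gets a lower (better) score than one whose samples are far from it.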
It measures
* Novelty = opposite of Memorization
* a generated sample is $\delta$-memorized if the NLL of the training set is $\delta$ lower than that of the test set on the Gaussian of the generated sample.
* Diversity
* Empirically shown
* Fidelity
* Empirically shown
FLS is high when
* The generated samples have poor quality -> Gaussian centers are far from the test set -> high test NLL
* The generated samples do not cover the data manifold -> high test NLL
* The generated samples overfit to the training set -> high test NLL
Strengths: * It measures novelty, fidelity, and diversity of generated samples. (not fully explained below)
* FLS correlates to sample fidelity (actually, corruption) as FID does.
* It is a one-value metric that reflects three aspects. It is easy to rank different models.
* The authors provide the benchmark table of SOTA models.
Weaknesses: 1. A super-close literature is missing: [rarity score]
1. Some important definitions are missing. E.g., training/test split, L189 data manifold
1. FLS correlates to sample fidelity not as linearly as FID
1. The mechanism how FLS correlates with sample fidelity/diversity is not described (although straightforward).
1. It is a one-value metric that reflects three aspects. It is difficult to compare different aspects with the same value.
1. FLSs for different k and C are not provided.
1. Train FLS and test FLS are not defined.
minor
1. Figure 5 middle shows FID but caption describes it differently.
1. Figure 5 right shows FLS versus Precision but caption describes it FLS versus Recall.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why should the SOTA generators be better than the copycat?
1. Could you clarify the difference between definition 3.1 and 3.2 other than the margin $\delta$?
1. How do diversity, fidelity, and memorization affect the metric? Is there a metric affects more than another?
To me, Q3 is the most important.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Use cases on non-natural images are not shown (mentioned as future work).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We want to thank Reviewer euCm for their feedback. We are glad that Reviewer euCm highlighted that a strength of FLS is that it “is a one-value metric that reflects three aspects [Fidelity, Diversity, Novelty],” making it “easy to rank different models.” We now address the specific points raised by the reviewer:
### How do diversity, fidelity, and memorization affect the metric?
We provide a detailed answer to this question in the global response to the reviewers. We also ran additional experiments that are presented in the 1-page pdf attached.
In summary, Fig.1 indicates that for FLS, fidelity matters more than diversity, which matters more than novelty, with a notable exception when almost all examples are copied where FLS produces among the worst scores. We argue this order is very much aligned with the usefulness of the generative model in a potential downstream task.
### SOTA generators vs. copycat?
The “copycat” generative model corresponds to a model that exactly copies the empirical data distribution. Such a model is useless as it cannot generate new samples from the data distribution and is unusable in most downstream tasks—e.g., data augmentation. Conversely, we know that data augmentation using diffusion models provides an improvement in terms of test accuracy [1,2]. A performance metric for generative modeling is useful if it is also a good proxy for the performance of downstream tasks. Hence, SOTA generators should be ranked better than the copycat model.
### Clarifications on Def 3.1 and 3.2
In Def. 3.1 we consider the (normal) distribution induced by a **single example** $x_j^{gen}$ and look at the difference between the likelihood of the train set $D_{train}$ and the test set $D_{test}$.
$$ p_{\hat \sigma}(x| x^{gen}_j) := \mathcal{N}( \varphi(x) | \varphi(x_j^{\text{gen}}),\hat \sigma_j^2 I_d)$$
It corresponds to whether this **individual** point ranks the train set more likely than the test set. (Thus it is a proxy for $x^{gen}_j$ being a memorized point)
In Def. 3.2 we consider the (MoG) distribution $p_{\hat \sigma}$ induced by our sampled generated points $D_{gen}$ (defined in Eq.1).
$$p_{\hat \sigma}(x| D_{\text{gen}}) := \frac{1}{m} \sum_{j=1}^m \mathcal{N}( \varphi(x) | \varphi(x_j^{\text{gen}}),\hat \sigma_j^2 I_d)$$
It corresponds to whether the generated distribution ranks the training set as more likely than the test set—which is a useful measure of detecting the degree of overfitting.
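A hedged sketch of the Def. 3.1 check may clarify the distinction: here a fixed variance `s2` and an illustrative threshold `delta` are used, whereas the paper fits per-sample variances $\hat\sigma_j$:

```python
import numpy as np

def gaussian_loglik(x, center, s2):
    # mean log N(x_i | center, s2 * I_d) over a set of embedded points x
    d = x.shape[1]
    sq = ((x - center) ** 2).sum(axis=1)
    return (-0.5 * sq / s2 - 0.5 * d * np.log(2 * np.pi * s2)).mean()

def delta_memorized(gen, train, test, s2=1.0, delta=0.0):
    """Def. 3.1 sketch: flag each generated point whose *single* Gaussian
    assigns the train set a mean log-likelihood more than `delta` above
    the test set (an exact training copy pulls the train likelihood up)."""
    return np.array([
        gaussian_loglik(train, g, s2) - gaussian_loglik(test, g, s2) > delta
        for g in gen
    ])
```

Def. 3.2 instead asks the same train-vs-test question of the *whole* mixture $p_{\hat\sigma}(\cdot \mid D_{\text{gen}})$, i.e., of the generated distribution rather than of one sample.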
### Related work: rarity score
We thank the reviewer for pointing out this recent and relevant reference which we will include in the updated manuscript. Our work complements [3] with a critical difference: FLS evaluates models along the three axes we mention. FLS punishes models that overly memorize, while the rarity score [3] allows models that produce very novel/rare samples to be adequately recognized.
### Definitions of training/test split, data manifold, train/test FLS
In the definition section, we mention that $D_{train}$ were the samples used to train the generative model, and $D_{test}$ were not used at the training stage. We will clarify this in the revision of the paper.
Regarding “data manifold” L189, we will replace this ill-defined term with “data distribution”.
Train/test FLS refers to the score associated with the likelihood of the training/test set over our density model $p_{\hat \sigma}$ (see Eq. 1 and 2 in the submission). Formally,
$$FLS_{train} = \text{FLS}(D_{\text{train}},D_{\text{gen}}) := - \tfrac{100}{d}\log p_{\hat \sigma}(D_{\text{train}} | D_{\text{gen}}) - C,$$
$$FLS_{test} = \text{FLS}(D_{\text{test}},D_{\text{gen}}) := - \tfrac{100}{d}\log p_{\hat \sigma}(D_{\text{test}} | D_{\text{gen}}) - C,$$
We will clarify that point in the revision of the paper.
### FLS correlates with sample fidelity not as linearly as FID
We thank the reviewer for this critical question. We address it in the global response.
### Mechanism on FLS and its correlates with sample fidelity/diversity
We describe in L188-191 of the main paper how FLS correlates with sample fidelity and diversity, “Poor sample quality leads to Gaussian centers that are far from the test set and thus a higher NLL. Similarly, a failure to sufficiently cover the data manifold will lead to some test samples yielding very high NLL.” We will clarify the phrasing in the final version.
### Using FLS (a one-value) metric to reflect three aspects
The field of deep generative modeling has been driven by the use of single metrics for evaluation, as they allow ranking models and measuring the progress of the field. For example, FID (fidelity and diversity) has been used to assess progress on image generation tasks from early GANs such as WGAN-GP to recent diffusion models such as EDM++. As shown in Fig.2 and Tab.1 of our pdf rebuttal, we see the capability of these models to memorize and overfit, a facet that is missing from current evaluation metrics. That is why we believe FLS can be seen as a holistic score extending the purpose of FID to a third aspect—i.e., novelty.
### FLSs for different k and C are not provided
The $k$s in Fig. 4 are illustrative, showing a visual example of density overfitting, and are not used to compute FLS. The $C$, on the other hand, is a dataset-dependent constant used to make our score positive (see footnote 2 in the main paper) and does not affect the relative values of FLS between models.
We appreciate the reviewer's feedback on our paper. We believe we have answered all the great points raised by the reviewer in our rebuttal and kindly request a reconsideration of the paper's score. We are also happy to answer any additional questions.
[1] Wang, Zekai, et al. "Better diffusion models further improve adversarial training." 2023
[2] Azizi, Shekoofeh, et al. "Synthetic data from diffusion models improves imagenet classification." 2023
[3] Han, Jiyeon, et al. "Rarity score: A new metric to evaluate the uncommonness of synthesized images." 2022
---
Rebuttal Comment 1.1:
Comment: I appreciate the rebuttal. My minor concerns are addressed.
One last thing I care about most is the usefulness of the proposed metric, related to W5 (one metric) and Q3 (impact of aspects on FLS). It is helpful that the rebuttal pdf reports how FLS changes along fidelity, diversity, and memorization. However, I am not sure if I will use FLS instead of FID unless I want a generative model for data augmentation for a discriminative model. Different users have different needs. Users do not know which aspect is affecting FLS. As mentioned by the authors in the rebuttal, FLS might be the same for two models: one model produces 90% good images and 10% bad images with 20% memorization, and another model produces 80% good images and 20% bad images with 10% memorization. As the quality of generated images has been saturated recently, I think we need more specific measures for different aspects.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their prompt response and willingness to engage in discussion. We believe they bring up two very pertinent points.
## Utility of FLS over FID
> I am not sure if I will use FLS instead of FID unless I want a generative model for data augmentation for discriminative models. Different users have different needs.
We understand the stickiness of FID as a metric and appreciate its utility. In fact, in most scenarios, we show that FLS largely agrees with FID and yields a similar model ranking (Figure 9.). However, we point out multiple failure modes of FID in the paper and show how they are addressed by FLS.
- **Heavy bias for smaller samples**: FID displays heavy bias, even up to 50k samples. FLS on the other hand works better with smaller sample sizes which makes it significantly more amenable to evaluating class-conditional generation (where there are often less than 10k samples per class) and for finding problematic classes/issues with conditional alignment.
- **Insensitivity to overfitting/memorization**: FID as it is currently computed (comparing with train samples) decreases with memorization and even when compared to test samples, a model copying the training set gets a SOTA FID.
- **Sensitivity to imperceptible transformations**: Very slightly altered samples can yield considerably worse FID scores. As shown in Fig.3 of the rebuttal PDF, such transforms cause samples from a SOTA model to be rated worse by FID than those produced by a 5-year-old model. As such, small changes in image processing can have disproportionate impacts on FID.
As well as other issues [1, 2]. We believe it is problematic to continue using a metric with known issues, especially as these issues are becoming more common. As such, we believe that FLS should be favored in most cases and especially when working with smaller datasets. Moreover, note that in our latest experiment (Table 1 of the rebuttal PDF) we observe that FLS with DINOv2 showcases a trend of the superiority of diffusion models in comparison to SOTA GANs (such as StyleGANXL) which was not clear with FID.
## Multi-faceted evaluation
> Different users have different needs. Users do not know which aspect is affecting FLS[...] I think we need more specific measures for different aspects.
We completely agree that specific metrics (such as precision, recall, rarity score, etc.) are very valuable and should be used to evaluate specific aspects of the best-performing models, but such a diagnosis is complementary to benchmarking and ranking using a single holistic metric. In fact, the use of a single metric by practitioners is undeniable:
- **Generative modeling for image data:** FID is by far the most used metric.
- **Machine translation:** Practitioners have been reporting the F-score [3, 4] which trades off precision and recall, or the BLEU score (a precision oriented metric).
- **Supervised classification:** Practitioners often use standard accuracy to evaluate their models even though some datasets have been saturated. Even for unbalanced datasets, practitioners often report the F1 score [5].
This can be explained by:
- Comparing two models with respect to several metrics is more challenging since they may be on the Pareto front ($\mathcal{R}^2$ (fidelity, diversity) or $\mathcal{R}^3$ (fidelity, diversity, novelty) are not totally ordered).
- When considering metrics such as precision and recall a model could over-optimize one which corresponds to a pathological behavior while still being on the Pareto front.
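The partial-order point above can be made concrete: with multiple metrics, many model pairs are simply incomparable. A minimal illustration (higher = better on each axis; the function name is ours):

```python
def dominates(a, b):
    """True when metric vector `a` Pareto-dominates `b`:
    at least as good on every aspect, strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
```

For two models with (fidelity, diversity) scores `(0.9, 0.4)` and `(0.5, 0.8)`, neither dominates the other, so per-aspect metrics alone cannot rank them; a single holistic score can.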
#### Flexibility of our Method
We would like to point out the flexibility of the feature likelihood methodology. We believe this form of density estimation allows for measuring various aspects of generative model evaluation not possible with FID:
- **Overfitting evaluation:** The % of overfit Gaussians and the generalization gap allow specifically for evaluating the overfitting behavior quantitatively. If the issue affecting FLS is overfitting, it can be picked up by these methods.
- Ranking of memorized samples provides a way to qualitatively evaluate memorization by the model
- **Fidelity evaluation**: If we estimate the density of the data distribution by using Gaussians centered at the test set, we can then use this to evaluate the likelihood of the generated samples (and thus find high-quality samples, such as in Appendix A.3).
- The likelihood of the whole dataset could be a quantitative measure of fidelity.
[1] Kynkäänniemi, Tuomas, et al. "The Role of ImageNet Classes in Fréchet Inception Distance." 2022.
[2] Parmar, Gaurav, Richard Zhang, and Jun-Yan Zhu. "On aliased resizing and surprising subtleties in gan evaluation." 2022.
[3] Van Rijsbergen, Cornelis Joost. "Foundation of evaluation." 1974.
[4] Derczynski, Leon. "Complementarity, F-score, and NLP Evaluation." 2016.
[5] Wang, Xudong, et al. "Long-tailed recognition by routing diverse distribution-aware experts." 2020. | null | null | null | null | null | null
Memory-Assisted Sub-Prototype Mining for Universal Domain Adaptation | Reject | Summary: This paper aims to improve previous Universal Domain Adaptation (UniDA) methods by further exploiting intra-class discrimination. For that, the authors propose a Memory-Assisted Sub-Prototype Mining (MemSPM) method. MemSPM learns to retrieve new task-oriented features given the input embedding features, and applies existing UniDA methods to the retrieved features. The paper also proposes an additional reconstruction task to demonstrate the explainability of the proposed method, as the authors claim. Experiments on four datasets are conducted under three DA settings.
Strengths: Considering the effect of learning intra-class discrimination for UniDA is indeed an interesting idea to focus on, and such motivation is new in the UniDA community. By exploiting the intra-class structure, the proposed MemSPM is somewhat novel.
Weaknesses: Although the motivation of exploiting intra-class structure is interesting for UniDA, the analysis and the evidence supporting the effectiveness of this idea are not sufficient. This is mainly due to the following concerns.
1. Subclass learning brings an additional learning challenge and increases the learning cost of the problem, and it is not always the case that classes have obvious subclasses, so it is hard to say whether forcing subclass learning would be beneficial to UniDA. To investigate this, I think there should be a solid analysis of the problem.
2. The proposed method introduces too many hyper-parameters to the learning process, including $N$, $S$, $K$, $\lambda$, $\lambda_1$, $\lambda_2$, and $\lambda_3$, etc., and there are not sufficient studies investigating those hyper-parameters for different datasets or tasks. Note that this is important in UniDA since there is no validation set for model selection. Therefore, it is hard to say whether the effectiveness of the method may come from hyper-parameter tuning.
3. Ablation studies are also insufficient for understanding the effectiveness of the different loss terms in Equation (8). Although improvements are shown compared to the DCC method, to my knowledge, with CLIP models a simple baseline of standard training on source data only may already outperform the proposed method. However, this is not compared in the experiments.
4. The results reported with ResNet50 are meaningless since the proposed method does not run on this backbone. This is also a limitation of the proposed method.
5. The experiments to verify the effectiveness of the proposed idea only conduct on the DCC method, which is not enough.
The authors claim that the proposed method provides interpretability via Figure 3, but I do not see how it supports explainability, since reconstruction does not imply interpretability. Random noise could also reconstruct the input.
The loss $\mathcal{L}_{cdd}$ is not explained in the paper. It is not ideal to require readers to understand it from other papers, as it is not widely known.
Some typos exist in the paper, and please carefully check if some formulas are presented correctly, e.g., Equations (2), (6).
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: All weaknesses listed above should be well addressed to improve the paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have shown some limitations of the proposed method, but should also consider limitations beyond the method itself.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ##### Re_Q1:
Although MemSPM introduces some additional learning challenges and costs, it is worth the cost because the 4 commonly used datasets show significant concept shift in 90% of categories (samples shown in **Figure 1(a)**), and larger datasets such as ImageNet exhibit even more intra-class distinction. MemSPM can learn this intra-class distinction and mine "sub-prototypes" for better alignment and adaptation. Moreover, MemSPM does not force the samples of one category into a fixed number of sub-classes. It is an adaptive process: a sample is represented by an unused memory item when it does not pair with the sub-prototype embeddings of existing sub-classes.
***
##### Re_Q2:
We conducted ablation experiments on hyperparameters, the results of which can be seen in **Figure 3(b)**. We found that MemSPM was insensitive to N, while larger S values resulted in better performance. For these experiments, we discovered that we only need to set a large S and N, and this setting depends solely on the GPU memory.
We have conducted experiments on the contribution of each loss. L_ce is the classification loss and cannot be removed. When L_cdd is removed, the model achieves **79.8%**, which is **4.4%** lower than using the full loss function. In these experiments, we found that the coefficient of L_ce **($\lambda_{1} = 0.1$)** and the coefficient of L_cdd **($\lambda_{2} = 3$)**, both provided by DCC, gave the best performance.
The reconstruction loss (L_rec) is not a main part of our objective. In our experiments, we find that L_rec has a very small impact on performance and is used mainly for visualization. As we are mainly concerned with the hyperparameters of the memory structure, these loss ablation results were omitted for space. The hyperparameters of the loss function are fixed for all datasets, so setting them does not require hyperparameter tuning or any information about the target domain. Thanks for the advice; we will add more details in the revised manuscript.
| Method | Backbone Pretrain | Ar2Cl | Ar2Pr | Ar2Rw | Cl2Ar | Cl2Pr | Cl2Rw | Pr2Ar | Pr2Cl | Pr2Rw | Rw2Ar | Rw2Cl | Rw2Pr | Avg |
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|
| CLIP-Baseline |ViT-B/16 CLIP| 64.6 | 84.3 | 78.1 | 73.7 | 88.2 | 86.5 | 68.1 | 68.7 | **89.6** | 68.5 | 69.4 | 86.6 | 77.2 |
| DCC+MemSPM Without Lcdd |ViT-B/16 CLIP| 75.9 | 75.4 | 86.4 | 80.1 | 71.6 | 87.5 | 70.1 | **87.1** | 88.7 | 74.2 | 73.5 | 88.8 | 79.8|
| DCC+MemSPM |ViT-B/16 CLIP | **78.1** | **90.3** | **90.7** | **81.9** | **90.5** | **88.3** | 79.2 | 77.4 | 87.8 | **78.8** | **76.2** | **91.6** | **84.2** |
***
##### Re_Q3:
We have conducted experiments on the contribution of each loss. L_ce is the classification loss and cannot be removed. When L_cdd is removed, the model achieves **79.8%**, which is **4.4%** lower than using the full loss function.
CLIP-based embeddings do carry some cross-domain knowledge, but they are still influenced by larger domain gaps. We have tested the simple CLIP baseline on OfficeHome, reaching **77.2%** (H-score), which is still **4.4%** lower than our MemSPM.
In **Table 2** and **Table 3**, the results of GLC, DCC, and MemSPM at the bottom all use ViT-B/16 (pre-trained by the CLIP model), so the comparison is fair. GLC was the SOTA method in UniDA and DCC is part of our code base. When these two methods are applied to the CLIP encoder, they are expected to perform better than a simple CLIP baseline.
***
##### Re_Q4:
Since the results of these typical methods were presented in many related works, we also keep them in the table for a complete comparison. We conducted experiments on GLC and DCC with ViT-B/16 (pre-trained by the CLIP model) for comparison. GLC was the SOTA method in UniDA and DCC is part of our code base, so the comparison is fair and effective.
***
##### Re_Q5:
MemSPM can indeed be used with other methods, but we chose DCC as the method that best fits our motivation for the UniDA task. The experiments listed in **Tables 2, 3, 4, and 5** have demonstrated the effectiveness of MemSPM. Thanks for the advice; we will apply MemSPM to other methods in the revision.
***
##### Re_Q6:
The embeddings used to reconstruct images come from the memory items. The memory items are learned by comparing them with the input-oriented embeddings, so they are very different from random noise. As shown in **Figure 3**, the t-SNE visualization shows that the sub-prototypes from memory have learned sub-class knowledge. Thus, reconstructing from these sub-prototypes shows that they have learned the representative features of the sub-classes.
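For intuition only, a common way such a memory read is implemented in memory-augmented networks is softmax addressing over learnable items; the sketch below is an assumption about this family of designs, not the paper's exact MemSPM formulation, and all names are illustrative:

```python
import numpy as np

def read_memory(query, memory, temperature=0.1):
    """Retrieve a sub-prototype embedding for one input.

    query:  (d,)   input-oriented embedding
    memory: (S, d) learnable memory items (sub-prototypes)
    Returns a convex combination of memory items and the addressing weights.
    """
    # cosine similarity between the query and each memory item
    sims = memory @ query / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(query) + 1e-8
    )
    w = np.exp(sims / temperature)
    w /= w.sum()          # softmax addressing weights over the S items
    return w @ memory, w  # retrieved sub-prototype embedding, weights
```

Under this reading, an item that closely matches the query dominates the retrieval, while queries far from all items spread their weight, which is consistent with the idea that unfamiliar samples gravitate toward otherwise unused items.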
***
##### Re_Q7:
Thanks for the advice. We will add more details of the L_cdd in the revised manuscript.
***
##### Re_Q8:
Thanks for the advice. We have carefully fixed these typos. | Summary: This work proposes to exploit the intrinsic structures for each class, where sub-prototypes are devised to associate domain-common knowledge for universal domain adaptation. Specifically, MemSPM employs a memory module to mine sub-class information, and a corresponding reconstruction module to derive task-oriented representations. Experiments on representative benchmarks are conducted to verify the effectiveness of the proposed approach.
Strengths: 1, This paper is generally well-written and easy to follow, and neat figures are presented to enable a more intuitive understanding.
2, The motivation for decoupling with subclass structures seems reasonable.
3, The technical details are well explained.
4, Surpassing previous methods with noticeable margins, justifying its effectiveness.
Weaknesses: I think the main drawback of this paper lies in its presentations:
1, Motivations of some designs are not well explained, i.e., why do sub-prototypes benefit the universal scenario?
2, Some technical details seem missing.
The details of these concerns are presented in the ‘Questions’ part.
Minors:
Page 5 Line 179: missing space ''[17]that''
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1, Why can sub-prototypes benefit the universal domain adaptation scenario?
I understand that, even within a domain, samples from the same class can be grouped into sub-classes. But a critical part is missing: why does this help the cross-domain association of common classes, which is the core problem for universal domain adaptation? An explanation or empirical justification is needed here, i.e., what is the pattern of retrieved sub-prototypes for common samples versus private ones?
2, Some technical details are not comprehensive enough.
1) Is the memory learnable parameters? How to initialize them? This can be basic knowledge for people familiar with this, but it is still necessary to briefly detail this.
2) After reading Sec. 3.5, it is still unclear to me how the sub-prototypes help align the embeddings \hat{Z}.
3, In Fig. 1 (c), does this method assume the sub-class of two domains can be matched? This seems unrealistic under the distribution shift.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ##### Q1:
Why can sub-prototypes benefit the universal domain adaptation scenario? I understand that, even within a domain, samples from the same class can be grouped into sub-classes. But, a critical part is missing why this helps the cross-domain association of common classes. which is the core problem for universal domain adaptation. An explanation or empirical justification is needed here, i.e., what is the pattern of retrieved sub-prototypes for common samples and private ones?
##### Re:
As our motivation in **Figure 1** illustrates, samples annotated as one category often have significant intra-class differences. However, previous work simply forces them to align for adaptation, so those methods are more likely to misclassify unknown classes as known ones. Features of different sub-classes still have gaps in the feature space, so aligning samples from different sub-classes is unreasonable both in terms of human understanding and in the learned feature space.
For these reasons, if we can align target-domain samples with source-domain samples at the sub-class level, we avoid the drawback of forcing together two samples that are very different, which makes the adaptation more reasonable.
The pattern of retrieved sub-prototypes is based on the similarity of the input-oriented embedding with the memory items. Given a sample of a common class, the addressing step finds similar sub-prototypes in the learned memory to create the task-oriented embedding for the downstream classification task. For a sample of a private class, however, the addressing step retrieves redundant memory items that were not used by the source domain, so its task-oriented embedding lies far from the embeddings of the common classes.
***
##### Q2:
Some technical details are not comprehensive enough.
1. Is the memory learnable parameters? How to initialize them? This can be basic knowledge for people familiar with this, but it is still necessary to briefly detail this.
2. After reading Sec. 3.5, it is still unclear to me how the sub-prototypes help align the embeddings \hat{Z}.
##### Re:
1. Thanks for the advice. The memory consists of learnable parameters, and we initialize the memory items from a uniform distribution. We will add these details to the revised manuscript.
2. In our approach, the memory learns sub-prototypes that embody the sub-classes of the source domain. When testing target samples, the encoder produces an embedding that is compared to the source-domain sub-prototypes stored in memory. An embedding for the query sample is then generated through weighted sub-prototype sampling from the memory. This reduces the domain shift before the embedding is passed to the classifier. Cycle-Consistent Alignment and Adaptation matches target-domain sub-prototype clusters to the source domain and then aligns similar sub-classes together. Due to space constraints, we did not repeat the details presented in DCC; we will make this clearer in the revised manuscript.
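The retrieval step described above can be sketched roughly as follows (an illustrative sketch only, not the paper's actual implementation; the function name, shapes, and the softmax addressing are assumptions):

```python
import numpy as np

def retrieve_task_oriented_embedding(z, memory, temperature=1.0):
    """Weighted sub-prototype sampling from a learnable memory (sketch).

    z:      [N, D] input-oriented embeddings from the encoder
    memory: [M, D] sub-prototypes learned from the source domain
    Returns [N, D] task-oriented embeddings composed only of
    source-domain sub-prototypes, reducing domain shift before the
    embedding is passed to the classifier.
    """
    sim = (z @ memory.T) / temperature                # [N, M] similarities
    w = np.exp(sim - sim.max(axis=1, keepdims=True))  # softmax addressing
    w = w / w.sum(axis=1, keepdims=True)              # weights sum to 1
    return w @ memory                                 # weighted sampling

# Toy usage with hypothetical sizes.
rng = np.random.default_rng(0)
memory = rng.uniform(size=(40, 128))  # uniform initialization, per the rebuttal
z = rng.normal(size=(8, 128))
z_hat = retrieve_task_oriented_embedding(z, memory)
print(z_hat.shape)  # (8, 128)
```

Because the output is a convex combination of source-domain sub-prototypes, a target sample's embedding is pulled toward the source feature space before classification, which is the intuition behind the reduced domain shift.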
***
##### Q3:
In Fig. 1 (c), does this method assume the sub-class of two domains can be matched? This seems unrealistic under the distribution shift.
##### Re:
In the visualization results in **Figure 3(a)**, we can see that MemSPM has learned sub-class knowledge. When MemSPM encounters a new domain, target-domain samples can match the sub-prototypes of each sub-class. The sampled source-domain sub-prototypes are then used to represent the target-domain input, reducing the distribution shift between the task-oriented embeddings of the two domains.
To validate this assumption, we tested visually similar samples from the two domains and found that the memory addressing module mostly samples the same sub-prototypes from the memory. This experiment demonstrates that the sub-classes of the two domains can be matched. We will add this analysis to the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thanks for the rebuttal.
The answers convince me and clarify some technical details.
Based on the rebuttal, I decide to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and efforts on our work. | Summary: This paper focuses on Universal Domain Adaptation (UniDA), a practical DA setting that does not make any assumptions on the relation between source and target label sets. The goal is to adapt a classifier from source to target domain such that both source and target domains may have their own private classes apart from shared classes. The paper claims that existing UniDA methods overlook the intrinsic structure in the categories, which leads to suboptimal feature learning and adaptation. Hence, they propose memory-assisted sub-prototype mining (MemSPM) that learns sub-prototypes in a memory mechanism to embody the subclasses from the source data. Then, for target samples, weighted sub-prototype sampling is used before passing the embedding to a classifier, which results in reduced domain shift for the embedding. They also propose an adaptive thresholding technique to select relevant sub-prototypes. Finally, they adopt the cycle consistent matching loss objective from DCC [24] along with an auxiliary reconstruction loss for training. They show results on UniDA, Partial DA, and Open-Set DA using standard benchmarks like Office-31, Office-Home, VisDA, and DomainNet.
Strengths: * The motivating ideas for the approach are interesting and intuitive. Further, the technical contributions are novel as well as effective.
* It is intriguing that the auxiliary reconstruction task provides interpretability, which is usually not possible in existing DA solutions.
* The paper is fairly easy to follow (with the exception of some equations and many typos and grammatical errors, see Weaknesses).
* With their method and the advantages of a CLIP-pretrained ViT model, they achieve large improvements over existing ResNet-based methods. While they also show small improvements over some existing methods using the CLIP-pretrained model, this can serve as a new strong baseline for future UniDA work.
Weaknesses: * The paper claims that existing UniDA works overlook the internal intrinsic structure in the categories.
* However, [W1] aims to resolve the same problem. [W1] proposes to learn lower-level visual primitives that are unaffected by the category shift in the higher-level features. And, in their proposed word-prototype-space, different visual primitives can be shared across domains and classes (including unknown classes).
* There is a significant overlap in the motivation given by this paper and that of [W1]. Consequently, the high-level conceptual novelty of this paper is overclaimed. However, I do believe that these conceptual ideas are interesting as well as important for UniDA.
* Please discuss the similarities and differences (both in terms of motivation and the actual approach) of this paper w.r.t. [W1].
* Another paper with similar conceptual ideas is [W2].
* This paper lacks some mathematical rigor.
* Eq. 1, 2: $\hat{Z}=W\cdot M$ is shown as matrix multiplication (I assume that it is not element-wise multiplication since dimensions of $W$ and $M$ are different), but the expansion of this matrix multiplication contains an arg-max over the elements of $W$. Then, it does not make sense for the overall computation to be a standard matrix multiplication.
* Eq. 1, 2: the text mentions that $s_i$ is the index of sub-prototypes in the $i^\text{th}$ item but Eq. 2 implies that $s_i$ is a particular dimension found with arg-max. This seems contradictory and is confusing.
* Eq. 2: Use $\mathop{\arg\max}_{j}$ instead of using `dim=1` since it is a mathematical equation and not the code implementation.
* Eq. 5: It is unclear which dimension is used for top-$k$
* Eq. 6: It should be $\max(... , 0)$ instead of just $\max(...)$.
* The requirement of a CLIP-pretrained backbone is very restrictive since the method cannot be extended to other settings (like medical imaging) where the CLIP-pretraining may be suboptimal. While the paper shows comparisons where prior methods use the CLIP-pretrained model, it should also show comparisons when starting from a random initialization as well as the more widely used ImageNet initialization.
* The paper claims that a CLIP backbone is needed to retrieve sub-prototypes in early iterations. Why not start retrieving sub-prototypes after a few epochs of normal training?
* L135: “eliminates the domain-specific information from the target domain”. This is a very strong claim which does not seem to be backed by evidence. Performing “domain alignment” is not the same as “eliminating” domain-specific information. Further, as we can see from Fig. 3, the sub-prototypes seem to be retaining domain-specific information.
* There are no sensitivity analyses for the several loss-balancing hyperparameters $\lambda_1, \lambda_2, \lambda_3$ (not even in the Supplementary). While the paper claims to have borrowed them from DCC, this approach is vastly different from DCC, and we need to check for sensitivity to these hyperparameters. Further, DCC does not have a reconstruction loss, so it is unclear how that hyperparameter is selected.
* There is no ablation study for the adaptive threshold $\lambda$. It should be compared to various fixed thresholds and the value of the adaptive threshold should also be plotted over the course of training to obtain more insights into its working.
* Other UniDA works, like OVANet [40] and [W1], study the sensitivity of their methods to the degree of openness (i.e. the number of shared/private classes) which changes the difficulty of the UniDA problem. This analysis is missing in this paper. This should be shown for a better understanding of the capabilities of the proposed method.
* Some more related work [W3-W4] on Open-Set DA and UniDA (apart from [W1, W2]) that is not discussed in this paper.
* Minor problems (typos):
* L53: “adaption” → “adaptation”
* L59: “shifts” → “shift”
* L92: use `unknown’ i.e. use a backquote in LaTeX for it to properly render the opened and closed quotes like in L102.
* L119: use math-mode for K in top-$K$.
* L124: “varies” → “vary”
* L126, 179: add space between text and \cite{...}
* L134: “differenciates $\hat{Z}$ with” → “differentiates $\hat{Z}$ from”
* L151: “max” → “maximum”
* L166: “only the $K$” → “only the top-$K$”
* L181: “$max$” → “$\max$”
* L244: “fellow” → “following”
* Minor problems (grammatical errors):
* L32: “aims” → “aiming”
* L40: “Since such kind” → “Since this type”
* L41: “almost happens in all the” → “occurs in almost all of the”
* L59: “embedding give into” → “embedding is passed to”
* L125: “sometimes is” → “is sometimes”
### References
[W1] Kundu et al., “Subsidiary Prototype Alignment for Universal Domain Adaptation”, NeurIPS22
[W2] Liu et al., “PSDC: A Prototype-Based Shared-Dummy Classifier Model for Open-Set Domain Adaptation”, IEEE Transactions on Cybernetics, Dec. 2022
[W3] Chen et al., “Evidential Neighborhood Contrastive Learning for Universal Domain Adaptation”, AAAI22
[W4] Garg et al., “Domain Adaptation under Open Set Label Shift”, NeurIPS22
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weaknesses section.
Overall, the technical contributions seem to be novel and intuitive. However, there are significant concerns regarding missing discussions on highly relevant work [W1], lack of mathematical rigor, missing sensitivity analyses and ablation studies, and the restrictiveness of requiring a CLIP-pretrained backbone. Hence, my rating is “4: borderline reject” at this time but I am willing to update my rating based on the rebuttal and discussion.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I appreciate that the paper provides both limitations and broader societal impact discussions in the Supplementary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ##### Re_Q1:
Although the concept of the prototype is mentioned in [W1] and [W2], there are clear differences between theirs and our MemSPM.
First, the meaning of prototype differs between [W1] and ours. In [W1], the subsidiary prototypes are extracted from randomly cropped images, so they only represent low-level, morphological, and partial image features. These subsidiary prototypes lack complete semantic knowledge, and the method cannot learn the concept shift within a category. Moreover, [W1] still uses the labeled categories directly for alignment and adaptation; its prototypes cannot represent subsets of samples within one category.
In contrast, our MemSPM allows memory items to extract complete semantic knowledge and maintain domain-invariant knowledge. To accomplish this, we use input-oriented embedding, which involves comparing the entire image feature with memory items. The memory can then sample a task-oriented embedding that represents the semantic knowledge of the input-oriented embedding. Our approach is designed to obtain a domain-invariant and semantic feature for categories with significant domain shifts. As a result, each sub-prototype can represent a sub-class in one category.
Second, the purpose of [W2] is very different from our MemSPM. [W2] aims to learn differences among unknown classes, similar to DCC. It still extracts features and aligns classes across domains directly based on one-hot labels, and it does not address the concept shift and differences within a category. In contrast, our method can mine sub-classes within a category when significant concept shift exists, reflecting the inherent differences among samples annotated as the same category. This supports universal adaptation with a more fine-grained alignment.
***
##### Re_Q2:
Thanks for the advice. We have revised these errors for clearer clarification.
**Eq. 1:** We apply a tensor operation on $W$ and $M$ using Einstein summation notation: `'nd,ckd->nkc'`. The memory has shape $[C \times K \times D]$ and $n$ is the batch size.
**Eq. 5:** The top-$k$ is taken over the $i$ dimension of $w$.
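For clarity, this einsum can be checked with a tiny shape sketch (the sizes are hypothetical; only the `'nd,ckd->nkc'` subscript pattern comes from the rebuttal):

```python
import numpy as np

# Hypothetical sizes: batch n, categories c, sub-prototypes per category k, dim d.
n, c, k, d = 4, 10, 5, 16
rng = np.random.default_rng(0)
z = rng.normal(size=(n, d))           # input-oriented embeddings, one row per sample
memory = rng.uniform(size=(c, k, d))  # memory M with shape [C, K, D]

# Similarity of each sample with every sub-prototype of every category:
# contract over d, keep n, k, c in that order.
scores = np.einsum('nd,ckd->nkc', z, memory)
print(scores.shape)  # (4, 5, 10), i.e. [n, k, c]
```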
***
##### Re_Q3:
We have conducted experiments with an ImageNet-initialized backbone. MemSPM on Office-Home with **ViT-B/16 (ImageNet)** achieves **76.7%** (H-score), which is **7.5%** lower than MemSPM with **ViT-B/16 (CLIP)**. Thus, a better pre-trained encoder yields better performance. We also tried retrieving sub-prototypes only after a few epochs of regular training, but this does not resolve the problem: it reaches only **64.3%**, and the loss does not decrease in the subsequent epochs.
| Method | Backbone Pretrain | Ar2Cl | Ar2Pr | Ar2Rw | Cl2Ar | Cl2Pr | Cl2Rw | Pr2Ar | Pr2Cl | Pr2Rw | Rw2Ar | Rw2Cl | Rw2Pr | Avg |
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|
| DCC+MemSPM |ViT-B/16 ImageNet | 57.1 | 85.0 | 88.4 | 60.8 | 61.1 | 85.2 | **83.5** | **76.1** | 87.5 | **82.7** | **77.3** | 76.4 | 76.7 |
| DCC+MemSPM |ViT-B/16 CLIP | **78.1** | **90.3** | **90.7** | **81.9** | **90.5** | **88.3** | 79.2 | 77.4 | **87.8** | 78.8 | 76.2 | **91.6** | **84.2** |
| DCC+MemSPM Without Lcdd |ViT-B/16 CLIP| 75.9 | 75.4 | 86.4 | 80.1 | 71.6 | 87.5 | 70.1 | 87.1 | 88.7 | 74.2 | 73.5 | 88.8 | 79.8|
| Fixed Threshold=0.005 DCC+MemSPM |ViT-B/16 CLIP | 64.6 | 86.7 | 87.4 | 63.3 | 68.5 | 79.3 | 65.9 | 65.8 | 81.4 | 70.7 | 68.8 | 85.5 | 73.9 |
***
##### Re_Q4:
Thanks for your advice. We have modified the claim in this section. What we mean is that the sub-prototypes are all learned from the source domain, and the target input is represented by these sub-prototypes. The task-oriented information retrieved from memory therefore mainly carries source-domain features, after which the classifier can classify accurately, just as it does on the source domain. In **Figure 3**, the visualizations appear to come from one domain because our memory contains only source-domain features and the decoder was trained on them.
***
##### Re_Q5:
Thanks for the advice. We have conducted experiments on the contribution of each loss. L_ce is the classification loss and cannot be removed. With L_cdd removed, the model achieves **79.8%**, which is **4.4%** lower than with the full loss function. Through experiments, we find that the coefficients of L_ce **($\lambda_{1} = 0.1$)** and L_cdd **($\lambda_{2} = 3$)** achieve the best performance. The reconstruction loss (L_rec) gives a slight performance improvement but is mainly designed for better visualizing and understanding the learned sub-prototypes. We will add these results to the revised manuscript.
***
##### Re_Q6:
Through experiments, we found the best-performing fixed threshold to be **0.005**. However, it limits the memory's ability to learn sub-prototypes, achieving only **73.9%** (H-score) on Office-Home. Moreover, a fixed threshold adds another hyperparameter to MemSPM that must be tuned for each setting. We will add this to the revised manuscript.
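The difference between a fixed and an adaptive threshold can be sketched as hard shrinkage on the addressing weights (an assumption based on the common max(w − λ, 0) form the review alludes to for Eq. 6; the paper's exact adaptive rule may differ, and the mean-based threshold below is purely illustrative):

```python
import numpy as np

def hard_shrink(w, lam):
    """Zero out addressing weights below a threshold, then renormalize.

    Sketch of max(w - lam, 0) shrinkage: with a fixed lam (e.g. 0.005,
    as in the ablation) the cut-off must be re-tuned per setting, while
    an adaptive lam can be derived from the weights themselves.
    """
    w = np.maximum(w - lam, 0.0)
    s = w.sum(axis=-1, keepdims=True)
    return np.where(s > 0, w / np.maximum(s, 1e-12), w)

w = np.array([[0.5, 0.3, 0.004, 0.196]])
fixed = hard_shrink(w, 0.005)        # fixed threshold from the ablation
adaptive = hard_shrink(w, w.mean())  # illustrative adaptive threshold
print(fixed)     # third weight is pruned; the rest are renormalized
print(adaptive)  # only the two strongest sub-prototypes survive here
```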
***
##### Re_Q7: Openness of setting:
The setting of different openness was listed in **Table 1**, and the results were listed in **Tables 2, 3, 4, and 5**. We also have done an additional setting on Officehome.
| Unknown class | Avg |
|:-------:|:-------:|
| 5 | 82.3 |
| 10 | 81.7 |
| 50 | 84.2 |
***
##### Re_Q8: Related works:
Thank you for your advice. We will add these works to the revised manuscript.
***
##### Minor problems (typos and grammatical errors):
Thank you for your advice. We have revised these problems.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I thank the authors for their detailed response and appreciate their efforts in the process.
Most of my concerns have been addressed and I have increased my rating.
However, I advise the authors to also include the DCC performance with CLIP initialization (which is 74.4% as per the main paper) along with the performance of DCC+MemSPM with ImageNet initialization (which is 76.7% as per the rebuttal). This actually strengthens the rebuttal because it shows that MemSPM gives an improvement over DCC+CLIP even with ImageNet initialization. For future drafts, please add the performance of DCC with ImageNet initialization for ViT-B/16 model.
---
Reply to Comment 1.1.1:
Comment: Thank you for providing valuable feedback on our work. We have added the performance of DCC with CLIP initialization along with the performance of DCC+MemSPM with ImageNet initialization. We have also conducted experiments on DCC with ImageNet initialization for the ViT-B/16 model and will include them in the final version.
Strengths: The writing of the article is very good. Graphical expressions such as t-SNE are very clear. The method has achieved a relatively high classification H-score.
Weaknesses: Some training details need to be explained, such as the selection of hyperparameters. How are N, S, and lambda adjusted, and on what criteria? If the choice is based on the final experimental results, it indirectly depends on the label information of the target domain.
The scalability of the method is relatively poor. If the data set is large and there are many categories, will there be many prototypes required, and how will the method perform? It is crucial to have the Domainnet dataset in the experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: mainly of the weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: This paper has no limitation sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ##### Q1:
Some training details need to be explained, such as the selection of hyperparameters. How are N, S, and lambda adjusted, and on what criteria? If the choice is based on the final experimental results, it indirectly depends on the label information of the target domain.
##### Re_Q1:
We carefully conducted ablation experiments on these hyperparameters, shown in **Figure 3(b)**. We find that MemSPM is insensitive to N, while larger S values result in better performance. Since the hyperparameters of main concern are those of the memory structure, we only need to set S and N large enough, a choice that depends solely on GPU memory. Moreover, we consistently applied **$\lambda_{1} = 0.1$** and **$\lambda_{2} = 3$** on all datasets. Therefore, setting the hyperparameters requires no information about the target domain, and we use the same fixed hyperparameters on all datasets.
***
##### Q2:
The scalability of the method is relatively poor. If the data set is large and there are many categories, will there be many prototypes required, and how will the method perform? It is crucial to have the Domainnet dataset in the experiments.
##### Re_Q2:
We do have results for **DomainNet**, VisDA, Office-31, and Office-Home in **Tables 2** and **3**. DomainNet, VisDA, and Office-Home all require many prototypes to represent sub-classes, and MemSPM demonstrates state-of-the-art performance on all of these benchmarks. As mentioned in **Re_Q1**, when dealing with a large number of categories, MemSPM only requires setting large values for N and S, which are related solely to GPU memory.
Rebuttal: We thank all the reviewers for their insightful feedback and list some responses to common questions in this section.
***
##### Q1: Impact of CLIP:
CLIP-based embeddings do carry some cross-domain knowledge, but they are still affected by larger domain gaps. We tested the simple CLIP baseline on Office-Home: it achieves only **77.2%** (H-score), which is still **7.0%** lower than our MemSPM.
We also conducted experiments comparing ViT-B/16 (pre-trained by CLIP), ViT-B/16 (pre-trained on ImageNet), and ViT-B/16 (no pre-training). MemSPM on Office-Home with ViT-B/16 (ImageNet) achieves **76.7%** (H-score), which is **7.5%** lower than MemSPM with ViT-B/16 (pre-trained by CLIP). Additionally, ViT-B/16 without pre-training achieves only **64.3%**, which is **19.9%** lower than with ViT-B/16 (pre-trained by CLIP). These experiments demonstrate that a better pre-trained encoder benefits sub-prototype mining.
| Method | Backbone Pretrain | Ar2Cl | Ar2Pr | Ar2Rw | Cl2Ar | Cl2Pr | Cl2Rw | Pr2Ar | Pr2Cl | Pr2Rw | Rw2Ar | Rw2Cl | Rw2Pr | Avg |
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|
| CLIP-Baseline |ViT-B/16 CLIP| 64.6 | 84.3 | 78.1 | 73.7 | 88.2 | 86.5 | 68.1 | 68.7 | **89.6** | 68.5 | 69.4 | 86.6 | 77.2 |
| DCC+MemSPM |ViT-B/16 None | 50.7 | 78.4 | 85.6 | 50.2 | 60.7 | 67.1 | 58.2 | 44.1 | 77.9 | 67.1 | 50.3 | 81.7 | 64.3 |
| DCC+MemSPM |ViT-B/16 ImageNet | 57.1 | 85.0 | 88.4 | 60.8 | 61.1 | 85.2 | **83.5** | **76.1** | 87.5 | **82.7** | **77.3** | 76.4 | 76.7 |
| DCC+MemSPM |ViT-B/16 CLIP | **78.1** | **90.3** | **90.7** | **81.9** | **90.5** | **88.3** | 79.2 | 77.4 | 87.8 | 78.8 | 76.2 | **91.6** | **84.2** |
***
##### Q2: Hyperparameters and Loss Function:
We conducted ablation experiments on the hyperparameters; the results can be seen in **Figure 3(b)**. We found that MemSPM is insensitive to N, while larger S values result in better performance. From these experiments, we only need to set S and N large enough, a choice that depends solely on GPU memory.
We have conducted experiments on the contribution of each loss. L_ce is the classification loss and cannot be removed. With L_cdd removed, the model achieves **79.8%**, which is **4.4%** lower than with the full loss function. Through experiments, we found that the coefficients of L_ce **($\lambda_{1} = 0.1$)** and L_cdd **($\lambda_{2} = 3$)**, as provided by DCC, give the best performance.
The reconstruction loss (L_rec) is not a main part of our objective; in our experiments it has a very small impact on performance and is used mainly for visualization. Since our primary concern is the hyperparameters of the memory structure, we did not have space to list these loss ablations. The loss-function hyperparameters are fixed across all datasets, so setting them requires neither hyperparameter tuning nor any information about the target domain. We will add more details in the revised manuscript.
| Method | Backbone Pretrain | Ar2Cl | Ar2Pr | Ar2Rw | Cl2Ar | Cl2Pr | Cl2Rw | Pr2Ar | Pr2Cl | Pr2Rw | Rw2Ar | Rw2Cl | Rw2Pr | Avg |
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|
| DCC+MemSPM Without Lcdd |ViT-B/16 CLIP| 75.9 | 75.4 | 86.4 | 80.1 | 71.6 | 87.5 | 70.1 | **87.1** | **88.7** | 74.2 | **88.8** | 73.5 | 79.8|
| DCC+MemSPM |ViT-B/16 CLIP | **78.1** | **90.3** | **90.7** | **81.9** | **90.5** | **88.3** | **79.2** | 77.4 | 87.8 | **78.8** | 76.2 | **91.6** | **84.2** |
***
##### Q3: Effectiveness of Adaptive Threshold
Through experiments, we found the best-performing fixed threshold to be **0.005**. However, it limits the memory's ability to learn sub-prototypes, achieving only **73.9%** (H-score) on Office-Home. Moreover, a fixed threshold adds another hyperparameter to MemSPM that must be tuned for each setting. We will add this to the revised manuscript.
| Method | Backbone Pretrain | Ar2Cl | Ar2Pr | Ar2Rw | Cl2Ar | Cl2Pr | Cl2Rw | Pr2Ar | Pr2Cl | Pr2Rw | Rw2Ar | Rw2Cl | Rw2Pr | Avg |
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|
| DCC+MemSPM |ViT-B/16 CLIP | **78.1** | **90.3** | **90.7** | **81.9** | **90.5** | **88.3** | **79.2** | **77.4** | **87.8** | **78.8** | **76.2** | **91.6** | **84.2** |
| Fixed Threshold=0.005 DCC+MemSPM |ViT-B/16 CLIP | 64.6 | 86.7 | 87.4 | 63.3 | 68.5 | 79.3 | 65.9 | 65.8 | 81.4 | 70.7 | 68.8 | 85.5 | 73.9 | | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work addresses the problem of universal domain adaptation by focusing on the intra-class structure within categories, which is often overlooked by existing methods.
The main contribution is the proposed Memory-Assisted Sub-Prototype Mining (MemSPM) method, which learns the differences between samples belonging to the same category and mines sub-classes in the presence of significant concept shift. By doing so, the model achieves a more reasonable feature space that enhances transferability and reflects inherent differences among samples.
Experimental evaluation demonstrates the effectiveness of MemSPM in various scenarios, achieving state-of-the-art performance on four benchmarks in most cases.
Strengths: S1: The primary contribution of this work is the introduction of sub-prototypes, learned from samples that belong to the same category but exhibit significant concept shift. The utilization of sub-prototypes allows for a more fine-grained adaptation process, which is an intuitive and interesting idea. The ablation experiment in Figure 3 (graph) supports the notion that mining sub-prototypes is indeed advantageous, as increasing the number of sub-prototypes (S) leads to a substantial performance improvement, from approximately 62% (with one sub-prototype per category) to around 80% (with 40 sub-prototypes per category).
S2: The results presented in Table 2 and Table 3 demonstrate significant performance improvements compared to previous works, with increases of +4.5% and +6.4% in H-score on DomainNet and Office-31 datasets for UniDA scenario. Additionally, there is a +1.6% improvement in H-score on the Office-Home dataset. It should be noted that the comparisons are not entirely apples-to-apples, as discussed in the weaknesses section.
Weaknesses: W1: The utilization of CLIP-based embedding, as mentioned in line 126, offers semantic capabilities that generalize across various domains (as shown by works such as [1, 2] that build on top of CLIP). However, the importance of using CLIP-based embedding is not clearly demonstrated in the ablation analysis. A comparison between CLIP-based embedding, learned embedding (without pre-training), and ViT-B/16 (pre-trained on ImageNet) would provide valuable insights. Additionally, since prior works do not utilize CLIP's semantic capabilities, the comparisons presented in Table 2 and Table 3 may not be apples-to-apples.
W2: The impact of the different losses, such as cross-entropy (L_ce), domain alignment loss (L_cdd), and the auxiliary reconstruction task (L_rec), on model performance is not clearly explained in the experiment section. Understanding the contribution of each loss would enhance the understanding of the paper.
W3: The sensitivity of hyperparameters across different scenarios, such as Open-Set Domain Adaptation (OSDA) and UniDA, is not adequately addressed in this section. Investigating the sensitivity of hyperparameters would provide valuable insights into their impact on model performance.
W4: Section 3.3.3 discusses the "Adaptive Threshold Technique for More Efficient Memory," but there is a lack of experimental details showcasing the memory efficiency of this technique. Without such evidence, it becomes challenging to fully appreciate the technical contribution.
W5: While the motivation and the main idea of mining sub-prototypes are novel, it is worth noting that memory-based prototype mining was explored earlier in works like [3]. This observation slightly diminishes the overall technical contribution.
W6: Supplementary material Figure 1 reveals that a significant portion (>60%) of the sub-prototype visualizations are not interpretable. This undermines the contribution of interpretability in this work.
[1] Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or. StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators. ACM Transactions on Graphics.
[2] Boyi Li, Kilian Q. Weinberger, Serge Belongie, Vladlen Koltun, René Ranftl. Language-driven Semantic Segmentation. ICLR 2022.
[3] Tarun Kalluri, Astuti Sharma, Manmohan Chandraker. MemSAC: Memory Augmented Sample Consistency for Large Scale Domain Adaptation. ECCV 2022.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please refer to the weaknesses section for the related questions that need more clarification.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: A notable limitation of the study is the lack of clarity regarding the contribution of various components of the proposed method to the overall performance. Specifically, the impact of CLIP-based embedding, which has demonstrated generalizable capabilities even in zero-shot scenarios across domains, needs to be thoroughly understood to fully appreciate the proposed components. Gaining insights into the individual contributions of different components would provide a deeper understanding of their influence on the overall performance. Further investigations or additional analyses focusing on these aspects would enhance the comprehensiveness and rigor of the study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for supporting our work.
##### Re_W1:
Although CLIP-based embedding does carry some cross-domain knowledge, it still cannot bridge the large domain gap present in these benchmarks. A baseline that simply adopts the CLIP encoder achieves only **77.2\%** (H-score) on the Office-Home dataset, which is **7.0\%** lower than our MemSPM.
As you suggested, we have conducted experiments comparing ViT-B/16 (pre-trained with CLIP), ViT-B/16 (pre-trained on ImageNet), and ViT-B/16 (without pre-training). The performance of MemSPM on Office-Home using ViT-B/16 (ImageNet) is **76.7\%** (H-score), which is **7.5\%** lower than MemSPM using ViT-B/16 (pre-trained with CLIP). Additionally, ViT-B/16 (without pre-training) achieves only **64.3\%**, which is **19.9\%** lower than ViT-B/16 (pre-trained with CLIP). These experiments demonstrate that a better pre-trained encoder benefits sub-prototype mining.
In **Table 2** and **Table 3**, GLC, DCC, and MemSPM all adopt the ViT-B/16 backbone (pre-trained with CLIP), so the comparison is fair. GLC is the SOTA method for UniDA, and DCC is the method our approach builds on.
| Method | Backbone Pretrain | Ar2Cl | Ar2Pr | Ar2Rw | Cl2Ar | Cl2Pr | Cl2Rw | Pr2Ar | Pr2Cl | Pr2Rw | Rw2Ar | Rw2Cl | Rw2Pr | Avg |
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|
| DCC+MemSPM |ViT-B/16 None | 50.7 | 78.4 | 85.6 | 50.2 | 60.7 | 67.1 | 58.2 | 44.1 | 77.9 | 67.1 | 50.3 | 81.7 | 64.3 |
| DCC+MemSPM |ViT-B/16 ImageNet | 57.1 | 85.0 | 88.4 | 60.8 | 61.1 | 85.2 | **83.5** | **76.1** | 87.5 | **82.7** | **77.3** | 76.4 | 76.7 |
| DCC+MemSPM |ViT-B/16 CLIP | **78.1** | **90.3** | **90.7** | **81.9** | **90.5** | **88.3** | 79.2 | 77.4 | **87.8** | 78.8 | 76.2 | **91.6** | **84.2** |
| DCC+MemSPM Without Lcdd |ViT-B/16 CLIP| 75.9 | 75.4 | 86.4 | 80.1 | 71.6 | 87.5 | 70.1 | 87.1 | 88.7 | 74.2 | 73.5 | 88.8 | 79.8|
| Fixed Threshold=0.005 DCC+MemSPM |ViT-B/16 CLIP | 64.6 | 86.7 | 87.4 | 63.3 | 68.5 | 79.3 | 65.9 | 65.8 | 81.4 | 70.7 | 68.8 | 85.5 | 73.9 |
***
##### Re_W2:
Thanks for the advice. We have conducted experiments on the contribution of each loss. L_ce is used for classification and cannot be removed. Removing L_cdd yields **79.8\%**, which is **4.4\%** lower than using the full loss function. In our early experiments, we observed that the coefficient of L_ce **($\lambda_{1} = 0.1$)** and the coefficient of L_cdd **($\lambda_{2} = 3$)** achieve the best performance.
The reconstruction task loss (L_rec) provides a slight performance improvement but is mainly designed for better visualizing and understanding the learned sub-prototypes. Since we focused on studying the hyperparameters of the proposed memory structure, the ablation results of the loss functions were not presented in the paper due to the space limit. We will add them to the revised manuscript.
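For concreteness, and assuming the three losses combine additively (the weight on L_rec is not stated in the rebuttal, so taking it as 1 is an assumption), the overall objective would take the form:

```latex
\mathcal{L}_{\text{total}}
  = \lambda_{1}\,\mathcal{L}_{ce}
  + \lambda_{2}\,\mathcal{L}_{cdd}
  + \mathcal{L}_{rec},
\qquad \lambda_{1} = 0.1,\quad \lambda_{2} = 3.
```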
***
##### Re_W3:
We examined the impact of these hyperparameters on MemSPM, and the results are illustrated in **Figure 3(b)**. We used the same memory-structure hyperparameters in all scenarios. MemSPM reaches SOTA results on OSDA and UniDA in **Tables 2 and 3**, and comparable results on PDA in **Table 4**, which indicates that the hyperparameters of MemSPM are not sensitive to different scenarios.
***
##### Re_W4:
Thank you for your advice. Our memory structure is randomly initialized. Through experiments, we found the best-performing fixed threshold to be **0.005**. A fixed threshold limits the memory's ability to learn sub-prototypes, achieving only **73.9\%** (H-score) on Office-Home. Moreover, a fixed threshold adds another hyperparameter to MemSPM, which must be adjusted for different settings. We will add more details to the revised manuscript.
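As an illustration of how a fixed addressing threshold can suppress memory items, here is a minimal numpy sketch assuming softmax attention over memory items with hard shrinkage; the function name, the addressing scheme, and all sizes are hypothetical simplifications, not the paper's actual adaptive technique.

```python
import numpy as np

def address_memory(query, memory, threshold=0.005):
    """Softmax attention over memory items; weights below `threshold`
    are zeroed out and the survivors renormalized (hard shrinkage)."""
    logits = memory @ query                 # similarity to each item, shape (S,)
    w = np.exp(logits - logits.max())
    w /= w.sum()                            # softmax addressing weights
    w = np.where(w > threshold, w, 0.0)     # drop weakly-addressed items
    w /= w.sum()                            # safe: max weight >= 1/S > threshold
    return w @ memory                       # read-out: weighted sum of sub-prototypes

rng = np.random.default_rng(0)
memory = rng.normal(size=(40, 16))          # S = 40 sub-prototypes, 16-dim
query = rng.normal(size=16)                 # input-oriented query embedding
out = address_memory(query, memory)
print(out.shape)                            # (16,)
```

With a fixed threshold, items whose addressing weight never exceeds 0.005 can never contribute to the read-out, which is one way such a cutoff could restrict sub-prototype learning.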
***
##### Re_W5:
There are several major differences between MemSAC [1] and our proposed MemSPM method.
First, MemSAC employs a non-parametric memory bank that directly stores the features extracted by the encoder. This approach leaves too much domain-specific knowledge in the memory feature space. In contrast, our method employs a learnable memory that samples memory items based on input-oriented embedding. This creates a task-oriented embedding with less domain-specific knowledge.
Second, the memory structure used in MemSAC can be traced back to the work published in [2]. We therefore view this structure as analogous to widely-used structures such as the Transformer and ResNet. Our use of a learnable memory for task-oriented feature mining, as well as our improvement of the memory structure for sub-prototype mining, represents a significant departure from previous work.
***
##### Re_W6:
In our supplementary material, we show the memory visualization results without dropping imperfect ones. For the robustness and scalability of MemSPM, we set large N and S values that can fit all datasets. We also conducted ablation experiments on N and S, shown in **Figure 3(b)**. We find that MemSPM is insensitive to N, while a larger S value results in better performance. Our learned memory bank therefore contains redundant items to some extent, i.e., the number of memory items exceeds what is necessary for the sub-prototypes. As a result, some items do not look semantically meaningful, but this does not affect the interpretability of our method.
***
[1] Tarun Kalluri, et al. MemSAC: Memory Augmented Sample Consistency for Large Scale Domain Adaptation. ECCV 2022
[2] Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. End-to-end memory networks. NeurIPS 2015.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I thank the authors for their comprehensive response and value their effort in the process.
They have adequately addressed the majority of my concerns, leading to an adjustment in my rating. | null | null | null | null | null | null |
Look Beneath the Surface: Exploiting Fundamental Symmetry for Sample-Efficient Offline RL | Accept (poster) | Summary: This paper proposes a physics-informed dynamics model TDM and a new offline RL algorithm TSRL, which exploit the fundamental symmetries in the system dynamics for sample-efficient offline policy learning, embedding and enforcing T-symmetry between a pair of latent forward, and reversing ODE dynamics to learn fundamental dynamics patterns in data. Empirical results on D4RL benchmark datasets validate the good generalization ability of TSRL.
Strengths: - The idea is interesting and makes intuitive sense.
- The paper is overall well-written.
- Authors do comprehensive experiments on D4RL tasks to evaluate the generalization ability of the new method.
Weaknesses: - The performance of TSRL is not comparable with other baselines in most Adroit human and cloned tasks.
- No comparison of other offline reinforcement learning methods using data augmentation.
- No experiments to validate TSRL alleviates the problem of over-conservatism.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Explain the reason that the performance of TSRL is not comparable with other baselines in most Adroit human and cloned tasks.
- Add comparisons of other offline reinforcement learning methods using data augmentation.
- Add related work of the data augmentation methods using offline RL in Sec. 6.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations of the paper are properly addressed by the authors, but the societal impacts of the paper are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1 & Q1. The performance of TSRL is not comparable with other baselines in most Adroit human and cloned tasks.**
- The full results for the Adroit tasks are listed in Appendix C, Table 4, due to the space limit of the main article; please check our supplementary material for details. We observe that TSRL achieves much better performance than the baseline algorithms on the pen tasks (both the full datasets and the reduced-size datasets), and comparable performance on the other tasks. Note that the Adroit tasks are substantially more challenging than the MuJoCo tasks due to their high dimensionality and potentially non-Markovian (human dataset) properties; most offline RL algorithms struggle on these tasks.
- In Appendix C, we also provide the comparative performance of TSRL on Antmaze-umaze tasks with full and 10k reduced-size datasets. We find TSRL also achieves substantially better performance under small datasets of these tasks.
> **W2 & Q2. No comparison of other offline reinforcement learning methods using data augmentation.**
- In Fig. 5 and discussion in Section 5 L287-293, we have provided the comparison of TSRL with S4RL[1] (reference [42] in the paper) with Gaussian noises on two MuJoCo tasks.
- To fully address the reviewer's concern, we have conducted additional experiments on the reduced-size 10k MuJoCo datasets to compare TSRL with other offline RL methods using data augmentation. In particular, we compared with the model-free method S4RL[1] with Gaussian (S4RL-N) and uniform (S4RL-U) noises, as well as a recent model-based data augmentation method, CABI[2] (reference [43] in the paper). CABI employs a pair of predictive dynamics models to assess the reliability of the augmented data. KFC[3] (reference [32] in the paper) is also a model-based data augmentation method, which utilizes Koopman theory to model the dynamical system and augments data points in the latent linearizable space. However, KFC has no publicly available code and re-implementing it is quite challenging; given the limited rebuttal period, we opted to evaluate only CABI against our method. The detailed results are reported in the following table.
**Table 1: Performance of data augmentation methods with 10k reduced-size D4RL datasets. Each result is generated by 3 random seeds.**
| **D4RL tasks**| **CABI** |**S4RL-N**| **S4RL-U**| **TSRL**|
| -------|----|----|----|----|
|Hopper-m|48.3 $\pm$ 3.9| 28.3 $\pm$ 6.2| 23.6 $\pm$ 4.7| **62.0 $\pm$ 3.7**
|Hopper-mr|19.8 $\pm$ 3.9|16.6 $\pm$ 12.9|12.5 $\pm$ 12.2|**21.8 $\pm$ 8.2**
|Hopper-me|38.3 $\pm$ 5.3|12.5 $\pm$ 3.8| 13.1 $\pm$ 4.7|**50.9 $\pm$ 8.6**
|Hopper-e|34.6 $\pm$ 24.4|14.1 $\pm$ 12.9|12.2 $\pm$ 11.6|**82.7 $\pm$ 21.9**
|Halfcheetah-m| 34.8 $\pm$ 1.9|25.1 $\pm$ 6.8|23.2 $\pm$ 7.1|**38.4 $\pm$ 3.1**
|Halfcheetah-mr| 23.5 $\pm$ 3.4|15.1 $\pm$ 9.3|14.8 $\pm$ 9.5|**28.1 $\pm$ 3.5**
|Halfcheetah-me| 29.9 $\pm$ 1.7|27.1 $\pm$ 7.1|23.4 $\pm$ 8.2|**39.9 $\pm$ 21.1**
|Halfcheetah-e| 4.2 $\pm$ 4.1|2.4 $\pm$ 3.9|1.8 $\pm$ 3.1|**40.6 $\pm$ 24.4**
|Walker2d-m| 42.4 $\pm$ 23.3|24.5 $\pm$ 4.3|21.9 $\pm$ 4.8|**49.7 $\pm$ 10.6**
|Walker2d-mr|11.7 $\pm$ 7.6|1.5 $\pm$ 2.1|1.4 $\pm$ 2.3|**26.0 $\pm$ 11.3**
|Walker2d-me|17.4 $\pm$ 9.2|21.9 $\pm$ 16.4|16.0 $\pm$ 13.2|**46.7 $\pm$ 17.4**
|Walker2d-e|20.2 $\pm$ 3.4|56.5 $\pm$ 26.7| 51.1 $\pm$ 29.7|**102.2 $\pm$ 11.3**
- The results clearly show that TSRL outperforms all offline RL baselines with data augmentation under small datasets. It is also observed that model-based methods TSRL and CABI generally perform better than model-free data augmentation method S4RL in this setting, due to access to additional dynamics information. Moreover, as CABI does not learn a strongly regularized dynamics model with T-symmetry consistency as in our proposed TDM, it still has a noticeable performance gap as compared to our method.
> **W3. No experiments to validate TSRL alleviates the problem of over-conservatism.**
- We have conducted experiments to evaluate the OOD generalization performance of TSRL, which is direct evidence that TSRL can alleviate the problem of over-conservatism. As shown in Fig. 7, L307-319 in Section 5, as well as Fig. 8, L592-607 in Appendix C, we constructed two low-speed datasets from the D4RL-Walker2d datasets by removing all high x-velocity samples. This creates smaller datasets in which a large proportion of the state-action space and transition dynamics is unobserved. As the rewards in these tasks encourage high-speed behavior, we test whether the agent can still generalize and learn well given only low-speed data.
- As shown in Figs. 7 and 8, existing offline RL algorithms perform poorly when trained with only low-speed data, primarily due to their over-conservative data-related regularizations, which prevent these algorithms from deviating far from the data distribution. By contrast, TSRL is still able to achieve good performance, thanks to its access to more fundamental dynamics information that remains invariant across low- and high-speed data. This is evident from the policy rollout distributions (right panels of Figs. 7 and 8): the policies learned by TSRL indeed generalize to novel high-speed behaviors that are not present in the training data.
> **Q3. Add related work of the data augmentation methods using offline RL in Sec. 6.**
- We thank the reviewer for the suggestion. We will add a more detailed discussion of related work and include the additional experiment results in our final paper.
## References
[1] Sinha, S., Mandlekar, A., & Garg, A. S4rl: Surprisingly simple self-supervision for offline reinforcement learning in robotics. CoRL 2022.
[2] Lyu, J., Li, X., & Lu, Z. Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination. NeurIPS 2022.
[3] Weissenbacher, M., et al., Koopman Q-learning: Offline reinforcement learning via symmetries of dynamics. ICML 2022.
---
Rebuttal Comment 1.1:
Title: Reviewer, please respond to the rebuttal
Comment: Reviewer, please respond to the rebuttal. | Summary: Current offline RL algorithms require large offline datasets for training and perform poorly on small datasets. This article proposes a framework to address this issue: by learning a T-symmetry enhanced dynamics model, it captures more fundamental dynamics relationships. The article then applies T-symmetry to offline RL, using the model to regularize constraints in the latent action space and to filter data for augmentation. The experimental results show improvements on small datasets.
Strengths: 1. The key idea is novel. This paper explores symmetries to enhance the performance of offline RL with small datasets. The proposed method offers a new technical insight.
2. The article is clearly structured.
3. The proposed method improves performance on offline RL with small datasets.
Weaknesses: 1. This article chooses the backward model because of irreversible actions, but this does not quite meet the definition of time-reversal symmetry. Other works (https://arxiv.org/abs/2111.12600) instead use an inverse model to recover reversible actions and then handle the irreversible actions separately, rather than simply giving them up.
2. The proposed method yields a great improvement on small datasets, but a large gap remains relative to the performance achieved with the complete data.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Compared with other methods, since T-symmetry can learn better representations and dynamics, why is there a great improvement on small datasets but not on full datasets?
2. Why choose to perturb the latent space and then filter the data, rather than perturbing directly in the state space and then filtering? Is this part of the performance improvement caused by the latent space, the filtered data, or both?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1. Difference from the original definition of time-reversal symmetry. Comparison with the treatment of irreversible actions in https://arxiv.org/abs/2111.12600.**
We thank the reviewer for providing this reference and will add it to our final paper. Regarding the differences:
- As discussed in the last subsection of Section 2 (L113-129), the original T-symmetry (i.e., $F(s,a)=\dot{s}=-\tilde{G}(s',a')$) can sometimes be broken by an irreversible action $a'$. Hence we consider an extended form: enforcing $F(s,a)=\dot{s}=-G(s',a)$ while preserving the first-order ODE requirement of T-symmetry. Note that this condition holds almost universally in typical MDPs, as it essentially requires the distributions $P(s,a,s')$ and $P(s',a,s)$ to be equal. It also has the advantage of being irrelevant to the impact of irreversible actions $a'$. Furthermore, the use of a reverse dynamics model (e.g., a function mapping $(s',a)\rightarrow s$) has already been adopted in many RL studies [1,2,3]; our method differs from these works in that we use it to construct T-symmetry consistency and model it as an ODE system.
- The CCWM (Cycle-Consistency World Model) mentioned by the reviewer adopts a more engineering-oriented approach to the issue of irreversible actions: it removes from the modeling process "irreversible" transitions that produce sudden changes in Q-values over a trajectory in the latent space. This ignores some dynamics properties of an MDP associated with such irreversible actions. In our work, such "irreversible" transitions do not cause a problem and can be implicitly captured in the modeling process.
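For reference, the two forms discussed above can be written side by side (notation as in the paper):

```latex
\underbrace{F(s,a) = \dot{s} = -\tilde{G}(s',a')}_{\text{original T-symmetry, breakable by irreversible } a'}
\qquad\Longrightarrow\qquad
\underbrace{F(s,a) = \dot{s} = -G(s',a)}_{\text{extended form adopted in TDM}}
```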
> **W2 & Q1. The proposed method has a great improvement on small datasets, but there is still a lot of gap from the performance of complete data.**
- As we have discussed in the conclusion, there is a trade-off between model generalizability and expressiveness. In TDM, we introduce multiple regularizers as well as the additional T-symmetry regularization to obtain a well-behaved dynamics model for the small-dataset setting. But this could hurt model expressiveness, as the model tries to extract the fundamental representations/patterns within the dataset, rather than fits each individual sample. This is beneficial for improving robustness and generalization in the low-data regime but can be overly restrictive for large datasets.
- In offline RL, learning from very small datasets and large datasets faces different challenges and potentially requires different algorithm design logics. For large datasets with high state-action space coverage, maximally exploiting the offline dataset and ensuring policy learning within data distribution to avoid distributional shift is already sufficient, which is exactly how most existing offline RL algorithms are designed. While under small datasets, strictly regularizing policy within data distribution will hurt performance, encouraging OOD generalization is the key to achieving higher performance. The data scarcity in the small-sample setting makes it more challenging to learn a reliable policy, thus requiring robust regularization techniques to prevent overfitting and promote generalization performance.
- Finally, it should be noted that although our proposed TSRL is designed for the small-dataset setting, as shown in Table 1 and Fig. 3, it still achieves performance comparable to (sometimes even better than) existing advanced offline RL methods on the complete datasets, despite the fact that D4RL provides overly large amounts of data for simple locomotion tasks.
> **Q2. Why choose to perturb the latent space and then filter the data, rather than perturbing directly on the state space, and then filtering?**
- First, in our TDM, the T-symmetry consistency is enforced on the latent forward and reverse dynamics rather than on the original state-action space. Thus it has to evaluate T-symmetry violations in the latent space.
- The forward and reverse dynamics are learned in latent space because they need to be first-order ODE systems to establish the T-symmetry relationship. For a general nonlinear dynamical system, directly fitting a first-order ODE system can suffer from relatively large errors. Hence, current best practices in the control community (such as Koopman theory [4,5] and SINDy [6,7]) first map the original state-action space into a well-behaved latent space and then construct latent first-order ODE dynamics by fitting the data.
- Lastly, perturbing in the latent space for data augmentation is much more convenient for implementation. As in our TSRL, the inputs of the Q-function are the latent states and actions ($(z_s, z_a)=\phi(s,a)$) from the TDM encoder $\phi$. And the augmented data is executed in the latent space while training the Q function. Perturbing directly on the original state space will incur another origin-to-latent conversion step and cause extra computation.
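To make the latent-space perturb-then-filter pipeline concrete, here is a minimal numpy sketch; the linear dynamics `A`, `B`, the function names, the noise scale, and the violation threshold are all illustrative stand-ins for TDM's learned components, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear latent dynamics standing in for TDM's learned first-order ODEs.
A = rng.normal(scale=0.1, size=(8, 8))
B = rng.normal(scale=0.1, size=(8, 4))

def forward_f(z_s, z_a):
    """Latent forward dynamics: predicts dz_s/dt."""
    return A @ z_s + B @ z_a

def reverse_g(z_s_next, z_a):
    """Latent reverse dynamics: under T-symmetry, predicts -dz_s/dt.
    Tied to A, B here for the toy; in TDM, G is a separately learned model."""
    return -(A @ z_s_next + B @ z_a)

def t_symmetry_violation(z_s, z_a):
    """How far F(z_s, z_a) = -G(z_s', z_a) is from holding, after one Euler step."""
    dz = forward_f(z_s, z_a)
    z_s_next = z_s + dz
    return np.linalg.norm(dz + reverse_g(z_s_next, z_a))

def augment_and_filter(z_s, z_a, n=100, noise=0.05, threshold=0.1):
    """Perturb the latent state; keep only perturbations consistent with T-symmetry."""
    kept = []
    for _ in range(n):
        z_pert = z_s + rng.normal(scale=noise, size=z_s.shape)
        if t_symmetry_violation(z_pert, z_a) < threshold:
            kept.append(z_pert)
    return kept

z_s, z_a = rng.normal(size=8), rng.normal(size=4)
kept = augment_and_filter(z_s, z_a)
print(f"kept {len(kept)} of 100 perturbed latent samples")
```

Because the augmented samples already live in the latent space, they can be fed straight to a Q-function that takes latent inputs, avoiding the extra origin-to-latent conversion mentioned above.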
## References
[1] Lai, H. et al., Bidirectional model-based policy optimization. ICML 2020.
[2] Wang, J. et al., Offline reinforcement learning with reverse model-based imagination. NeurIPS 2021.
[3] Lyu, J., Li, X., & Lu, Z. Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination. NeurIPS 2022.
[4] Weissenbacher, M., et al., Koopman Q-learning: Offline reinforcement learning via symmetries of dynamics. ICML 2022.
[5] Mezic, I. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dynamics, 2005.
[6] Brunton, S., et al., Discovering governing equations from data by sparse identification of nonlinear dynamical systems. PNAS, 2016.
[7] Champion, K., et al., Data-driven discovery of coordinates and governing equations. PNAS, 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the responses which address and answer my questions. After reading the rebuttal and reviews of other reviewers, I decide to keep my score. | Summary: This work introduced a Time-reversal symmetry enforced dynamics model, which leverages the consistency between a pair of forward and reverse latent dynamics for improving the sample efficiency of offline RL algorithms. Conducted experiments demonstrate the effectiveness of the proposed method.
Strengths: - The proposed method makes sense in improving the sample efficiency of offline RL algorithms.
- This work is clearly presented so that it is easy for readers to catch up with the main ideas.
Weaknesses: - I suggest the authors strengthen the analysis of the rationale, i.e., why the proposed method can improve the sample efficiency of offline RL algorithms. This would make the paper more insightful.
- This work lacks sufficient discussion on the relation/comparison with related works, especially for those also exploiting the consistency, e.g., PlayVirtual [1] and some others cited by [1].
[1] Yu, Tao, et al. "Playvirtual: Augmenting cycle-consistent virtual trajectories for reinforcement learning." Advances in Neural Information Processing Systems 34 (2021): 5276-5289.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why can the proposed method improve the sample efficiency of offline RL algorithms? Please give an in-depth and convincing analysis.
2. What are the relations between this work and those mentioned in the weaknesses part?
3. An open question: are there any ideas to extend the core idea of this work to online RL algorithms?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please see the weaknesses and questions parts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1 & Q1. Rational of the sample efficiency improvement of TSRL**
The sample efficiency of TSRL is a joint result of a series of elegant and closely related design choices:
- Firstly, learning fundamental/parsimonious dynamics is essential to improve model performance under small datasets. Learning fundamental properties within data, in principle, does not require large or high-coverage datasets. Moreover, a fundamental model can also help remove spurious correlations and maximally promote model stability and generalization under limited data (see the discussion in L54-65 in our paper).
- T-symmetry happens to be one of the simplest and most fundamental properties that we can leverage to enforce such a fundamental property (see the discussion in L54-65, L104-129). We further extend T-symmetry to make it broadly applicable to generic MDP settings (see the discussion in L113-129).
- By modeling the latent dynamics to be first-order ODE systems (forcing the latent dynamics to be mathematically simple) and enforcing extended T-symmetry, we can obtain a well-regularized dynamics model (TDM). This model can provide a more effective state-action representation for offline RL (see the discussion in L184-199 and our ablation on representation learning in Fig. 4).
- The deviation in latent actions and the consistency with T-symmetry specified in TDM actually provide another perspective to detect unreliable or non-generalizable samples. These deviations can serve as a new set of policy constraints to replace the highly restrictive OOD regularizations in existing offline RL algorithms (see the discussion in L200-207, 216-222). Moreover, as we have shown in Fig. 7, this new type of policy constraint allows the policy to reliably generalize to OOD regions without being constrained to the training data distribution, leading to better small-sample performance (see L307-319 for detailed discussion).
- Lastly, compliance with T-symmetry also enables a reliable latent data augmentation scheme that further addresses the limited size of training data (see empirical analysis in L287-293).
> **W2 & Q2. Insufficient discussion on the related works, especially PlayVirtual [1].**
We thank the reviewer for providing this reference, and we will add it to our final paper. Regarding the relationship and difference between TSRL and PlayVirtual:
- **Problem setting and model construction:** PlayVirtual is designed for online RL settings where the dynamic models can be continuously improved with fresh interaction data to mitigate multi-step rollout compounding errors. While in offline small dataset settings, multi-step rollouts can lead to considerable compounding errors, even adding cycle-level forward & backward consistency may not be sufficient to properly regulate the model.
On the other hand, our proposed TDM **does not perform any rollout generation**. It directly regulates the T-symmetry consistency between ODE latent forward and reverse dynamics at each step ($F(s,a) = -G(s',a)$), which applies a much stronger regulation to improve the model's small-sample performance.
- **Model learning:** PlayVirtual learns the forward and backward dynamics models with supervision from future/previous state representations. In our TDM, the latent forward and reverse dynamics models are learned as first-order ODEs ($F(s,a)=\frac{dz_s}{dt}$; $G(s',a)=-\frac{dz_s}{dt}$) to extract the essential patterns of the dynamical system.
- **Integration with the RL algorithm:** PlayVirtual directly incorporates the learned dynamics into standard online RL algorithms (e.g., SAC and Rainbow). In contrast, we propose a new offline RL algorithm, TSRL, that closely integrates the learned TDM, making full use of the dynamics-enhanced information from TDM to improve offline RL performance under limited data.
> **Q3. Are there any ideas to extend the core idea of this work to online RL algorithms?**
We thank the reviewer for this thoughtful comment. In principle, our method is also applicable to the online setting, but there are several modifications that could be introduced to our method to enable the best performance:
- A small set of initial samples needs to be collected in the environment using an initial or random policy to warm-start the learning of TDM. This will make the learned representation evolve smoothly during the early RL training stage.
- The strong regularizations in TDM could be properly relaxed, as online samples can be used to continuously improve model learning. We can trade off some regularization to promote model expressiveness.
- The offline backbone RL algorithm in TSRL needs to be changed to an online RL algorithm with some exploration schemes to remove the pessimism. The T-symmetry regularized policy constraints are probably not necessary for the online setting and can be removed.
---
Rebuttal Comment 1.1:
Title: Thanks for your responses.
Comment: I confirm that I have read your responses, and suggest you add the content of your rebuttal to the revised paper.
---
Rebuttal Comment 1.2:
Title: Reviewer, please submit your response to author's rebuttal
Comment: Please read the rebuttal of the authors and respond. | Summary: The paper investigates the time-reversal symmetry of forward and reverse dynamics in reinforcement learning (RL). The authors propose a Time-reversal symmetry enforced Dynamics Model (TDM) that models the consistency between forward and reverse dynamics. Using the TDM, they further propose an offline RL algorithm that leverages the learned TDM in three ways: 1. Using the representation from TDM for value function learning; 2. Using TDM to penalize OOD samples; 3. Using TDM to moderate useful data augmentation. Extensive experiments demonstrate that the proposed method outperforms a number of baselines, especially, the proposed method can learn a better policy with significantly fewer samples.
Strengths: 1. The proposed method is novel and backed by convincing experimental results.
2. The concept of time-reversal symmetry could potentially be widely applied in RL, as it represents a fundamental structure in many RL problems.
Weaknesses: 1. The proposed method leverages a learned dynamic model for offline RL, whereas most of the baselines are model-free (except for MOPO), which seems somewhat unfair. In particular, the proposed method appears somewhat related to the Dreamer [A, B] method, which also leverages a dynamic model for data augmentation. I am curious about the authors' thoughts on comparing with Dreamer.
2. The claim of leveraging time-reversal symmetry is slightly concerning as the method essentially learns the forward and reverse dynamic models instead of leveraging the time-reversal symmetric property of the MDP (e.g., P(s, a, s')=P(s', a, s) in certain scenarios).
[A] Hafner, Danijar, et al. "Dream to Control: Learning Behaviors by Latent Imagination." International Conference on Learning Representations. 2019.
[B] Hafner, Danijar, et al. "Mastering Atari with Discrete World Models." International Conference on Learning Representations. 2020.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I wonder how the authors ensure the fairness of their experiments, given that the proposed method appears more complex than the baselines. Do all the baselines have a similar amount of trainable parameters as the proposed method?
2. Instead of ensuring the T-symmetry using an extra loss term, is it possible to constrain the neural network architecture so that the model respects the T-symmetry by definition (like an equivariant neural network architecture)?
3. The proposed method doesn't seem to be specific to offline RL. Have the authors tried to use it in online RL?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do address the limitations of their work. However, the discussion could be more comprehensive. For example, under what circumstances would the T-symmetry break? Are there scenarios where the proposed method might fail?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate the reviewer's positive feedback and valuable comments.
> **W1. Comparison mostly to model-free methods rather than model-based methods. In particular, the comparison with Dreamer.**
- First, we'd like to highlight that our approach is very different from typical model-based methods. Most existing model-based methods use the learned dynamics model to generate imaginary rollouts to facilitate policy learning. In our approach, we **do not** use the model for rollout generation; we only use it to learn well-behaved representations as well as consistency metrics for policy constraints and data augmentation. If the reviewer inspects our value and policy learning procedure in Section 4, it actually shares more similarities with many model-free policy-constraint offline RL methods.
- We adopt this design because, under offline small-dataset settings, it is generally not possible to learn a dynamics model accurate enough for reliable rollout generation, and a poor dynamics model can negatively impact policy learning. However, learning reasonable representations and consistency-check metrics remains possible if we learn a more fundamental dynamics model with strong physics-informed regularizations. This is evident in Fig. 1, Fig. 3, and Table 1 of our paper, where the model-based offline RL method MOPO suffers severe performance degradation with reduced data size compared to model-free methods and our proposed TSRL.
- Lastly, Dreamer v1 & v2 are online RL methods, which can continually acquire fresh environment samples to improve the accuracy of the model, whereas in our setting only a very small offline dataset is given. This significantly exacerbates the difficulty of model and policy learning. Note that even though Dreamer is sample-efficient, the Dreamer paper reports that it still needs $5\times 10^6$ online environment samples to learn good policies for many tasks, while in our setting we provide each algorithm only 10k offline samples.
> **W2. Not leveraging the time-reversal symmetric property of the MDP (e.g., P(s, a, s')=P(s', a, s) in certain scenarios)**
- If the reviewer closely inspects our proposed extended T-symmetry, we introduce two ODE systems $F(s,a) = \dot{s}$ and $G(s',a)=-\dot{s}$, where $\dot{s}=s'-s$. This is equivalent to constructing two consistent systems $(s,a)\rightarrow s'$ and $(s',a)\rightarrow s$, essentially similar to the $P(s, a, s')=P(s', a, s)$ relationship in an MDP mentioned by the reviewer, with the difference that we further require both dynamics to be ODE systems.
- Due to the above construction, our proposed extended T-symmetry is more broadly applicable as compared to the original T-symmetry (i.e., $F(s,a) = \dot{s}=-G(s',a')$) in MDP settings, as the latter can sometimes be broken by irreversible actions. Please refer to L113-129 in our paper for a detailed discussion.
> **Q1. Fairness regarding using more complex model. Do all the baselines have a similar amount of trainable parameters as the proposed method?**
- In this paper, we propose an RL algorithm rather than a supervised learning model; the setting is different, and the number of parameters does not necessarily determine the performance. Note that in the Dreamer v1 & v2 papers, these more complex methods are also compared with lightweight methods like DQN and A3C.
- To ensure comparability, we use the same architectures for the Q-networks and policy network as the other baseline algorithms in our experiments. The only addition is the incorporation of TDM for representation learning and consistency-metric computation. Note that simply adding extra models/parameters for representation learning may not yield performance improvements comparable to those of TSRL. As reported in Fig. 4 of our paper, baselines using SimSiam and AE-fwd-rep representations have a similar number of parameters to TSRL, but their performance improvements are less significant.
> **Q2. Possibility of constraining the neural network architecture to enforce T-symmetry by definition (e.g. equivariant NN)?**
- We appreciate the reviewer for the insightful comment. We actually explored the possibility of employing equivariant NN architectures to enforce T-symmetry, but we found some applicability issues:
- Most existing equivariant NN techniques leverage relatively simple and explicitly known equivariant mappings/transformations of the system. However, in our problem, the forward and reverse ODE dynamics are unknown and need to be learned. Enforcing equivariant relationships between two unknown systems is less straightforward to implement in a NN architecture.
- In contrast, our method embraces simplicity and generality, without relying on excessive data or task-specific knowledge. We enforce T-symmetry by simply incorporating a few supervised loss terms. Nevertheless, constraining the NN architecture is a valuable idea, and we will continue to explore this direction in future work.
> **Q3. Applicability to online RL?**
- In principle, our method can also be used in online RL. However, as discussed in W1, online RL allows continuous collection of fresh samples through online interaction, hence the accuracy of the model can be improved during training. This makes heavy regularization used in TDM less necessary for online settings, especially in the later part of the training.
- We focus on the small-sample offline RL setting, as it poses more demanding challenges and has many real-world deployment scenarios. When only a very small number of samples is given, one has to incorporate strong regularization to ensure reasonable generalization as well as to alleviate distributional shift in offline policy optimization.
> **Limitations: more discussion on the cases when T-symmetry breaks**
- Please refer to our response to W2. Our proposed extended T-symmetry improves over the original T-symmetry definition for generic MDP settings and is not susceptible to the negative impact of irreversible actions.
---
Rebuttal Comment 1.1:
Comment: The reviewer thanks the authors for their thoughtful rebuttal. Most of my concerns are addressed, but I would like to clarify W2.
I apologize for mistyping the equation in my review; what I meant was $P(s, a, s')=P(s', \mathbf{-a}, s)$, which is similar to the original T-symmetry $F(s, a)=\dot{s}=-G(s', a')$. I understand that the proposed approach is not limited by irreversible actions, but the original T-symmetry is. However, would the original T-symmetry be more efficient when the assumption of reversible actions is satisfied? Essentially, the original T-symmetry has the potential to automatically generalize to reversed transitions, but the proposed method cannot.
This is by no means criticizing the proposed approach, instead, I acknowledge that the proposed method resolves the reversible assumption. My point is that, first, as mentioned above, though the original T-symmetry is more constrained, does it have advantages when the assumption is satisfied? Second, IMO the word `T-symmetry` implies the original T-symmetry where the transition is symmetric when the time is reversed. I am slightly concerned about the wrong implications it might have for the readers.
---
Reply to Comment 1.1.1:
Title: Response to the Reviewer P7X7 Comments (1/2)
Comment: We really appreciate the reviewer's thoughtful comment. Regarding the new comments on T-symmetry:
- The original T-symmetry for physical systems is actually defined on state measures $\mathbf{x}\in\Omega$ of dynamical systems (i.e., $d \Gamma(x)/dt=-F(\Gamma(x))$, where $\Gamma$ is the time-inversion transformation), rather than on state-action pairs $(s,a)$ as in typical control problems. In those systems, the evolution of $\mathbf{x}$ is determined by underlying physical laws, whereas in a control problem an external control policy influences state evolution. Hence some adaptation of the original T-symmetry has to be made in order to make it usable in control problems, especially in the MDP setting.
- In our adapted *extended T-symmetry* for the MDP setting, we preserve most of the characteristics of the original T-symmetry definition, such as using ODE forward and reverse dynamics and time-reversal on states (i.e., $F(s,a)=\dot{s}=-G(s',a)$). We model the reverse dynamics as $G(s',a)$ rather than $G(s',a')$, as it can overcome the irreversible-action issue while still roughly following the core idea of T-symmetry.
- Moreover, in the abstract, introduction, preliminaries, and method sections of our paper, we only refer to our treatment as the "extended T-symmetry" rather than "T-symmetry" to make clear that it is an adaptation of the original T-symmetry.
- Finally, in the early development stage of our work, we actually tested using $G(s',a')$ as the latent reverse dynamics, but it led to inferior performance compared to our final form $G(s',a)$, probably due to its inability to capture irreversible actions. We will reproduce these results and reply in a follow-up post in the next 2 days. We hope this can address your remaining concerns about our method.
Minimum-Risk Recalibration of Classifiers | Accept (spotlight) | Summary: Background:
Generating reliable probability estimates alongside accurate class labels is crucial in classification tasks. Calibration, which refers to the alignment between predicted probabilities and empirical frequencies of labels, is highly desirable in various applications. However, many machine learning algorithms lack inherent calibration.
This paper aims to address these issues in two ways:
1. Development of a unified framework for recalibration that incorporates both calibration and sharpness in a principled manner.
2. Proposal of a composite estimator for recalibration under label shift, which converges to the optimal recalibration and enables sample-efficient adaptation of classifiers to label-shifted domains.
Strengths: 1. This paper is exceptionally well-written, with a clear and logical structure, making it easily understandable to a wide audience.
2. The theoretical foundation of this paper is solid, and the proof process is very detailed, making it easy to follow.
3. The algorithm proposed by the author is simple and efficient, making it easy to implement.
Weaknesses: 1. Although the author provides solid theoretical analysis, this paper lacks enough experiments to demonstrate the effectiveness of their method.
2. The author employs three assumptions, and in my view, assumption 2 is a rather strong one, which is difficult to guarantee in practical applications.
3. The experiments in this paper were conducted primarily on toy datasets, and the authors do not provide any experiments on applying recalibration in real-world applications.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. In Eq. (9), I understand that the function $\hat{h}(z)$ calculates the expected label within the bin that $z$ belongs to. So, why is $\hat{h}$ a monotonically increasing function of $z$ and its growth pattern similar to the cumulative distribution function (CDF) of $z$? Although you have made Assumption 2, can I understand that this assumption implies that $y$ must follow a specific distribution form in order to satisfy the monotonicity of $\hat{h}$?
2. In the experimental section, the author only presents a very simple toy scenario, and I am quite curious about the performance of the author's algorithm in more complex scenarios. For instance, in real-world classification datasets like cifar-10-long-tail or cifar-100-long-tail, how does the recalibration effect of more complex models, such as neural networks or decision trees?
3. In Fig. 1 (a), as the value of $n$ increases, regardless of the magnitude of $B$, $R^{cal}(\hat{h})$ exhibits a noticeable decrease. Does this imply that when the dataset size $n$ becomes sufficiently large, $R^{cal}(\hat{h})$ has already reached a level where recalibration is unnecessary?
If the author can answer my question, I will change my score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable feedback on our work. With gratitude for the positive evaluation, we are committed to further clarifying our contributions by addressing the concerns and questions raised. To streamline this endeavor, our response is organized to initially address the highlighted weaknesses, followed by detailed point-by-point responses to the specific questions.
### **Weaknesses**
#### 1\. Insufficient experiments:
We believe that the numerical evidence presented in our original submission effectively validates both the theoretical claims and the efficacy of our method. Specifically, we validate the main results outlined in Section 4, including the risk upper bounds in Theorem 1 (Figure 1) and the optimal choice of the number of bins $B$ (Figure 2). Additionally, the significant contributions of Section 5, such as the explicit form of optimal recalibration under label shift and a two-step recalibration estimator with theoretical guarantees, are substantiated by Table 1.
While we maintain confidence in the sufficiency of our original numerical experiments to validate our theoretical findings, we also acknowledge the value of extending numerical studies, e.g., by including comparisons to other methods. To address this, we have conducted additional numerical experiments that compare the performance of our proposed UMB method against three other approaches: UWB [16], Platt scaling [34], and the hybrid method [26]. The results of these supplementary comparisons are included in the Author Rebuttal report.
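Of the comparison methods above, Platt scaling [34] is perhaps the simplest to illustrate: it fits a logistic map $p = \sigma(a\,s + b)$ on held-out scores. A minimal sketch (plain gradient descent on the log-loss; the learning rate and step count are illustrative assumptions, not the settings used in our experiments):

```python
import numpy as np

def platt_scale(scores, y, lr=0.1, steps=2000):
    """Fit p = sigmoid(a * s + b) to held-out (score, label) pairs by
    gradient descent on the log-loss (hyperparameters are illustrative)."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - y                     # d(log-loss)/d(logit)
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

def apply_platt(scores, a, b):
    return 1.0 / (1.0 + np.exp(-(a * scores + b)))
```

Note that because the fitted map is a strictly monotone sigmoid, Platt scaling preserves the ranking of scores but restricts the recalibration function to a parametric family, unlike binning-based estimators.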
#### 2\. Validity of Assumption 2:
We appreciate the reviewer's recognition of the potential stringency and practical challenges related to ensuring Assumption 2. However, it's important to highlight that while this assumption might appear stringent, it remains sensible for a reasonably well-performing classifier $f$, for which the underlying trend is expected to persist even if predicted probabilities $f(x)$ are imprecise. Specifically, we should expect $f(x_1)\leq f(x_2)$ when $P[Y=1|x_1]\leq P[Y=1|x_2]$ for a reasonable classifier.
Additionally, it's worth noting that similar monotonicity assumptions are prevalent in related literature. For instance, such assumptions are utilized in recalibration through isotonic regression [44], maintaining accuracy via order-preserving maps [45], and selecting bin numbers for monotonicity preservation [37].
#### 3\. Lack of experiments with real-world datasets:
Our main emphasis in this paper lies in presenting a fresh recalibration perspective alongside a thorough analysis of a working method, making it primarily theoretical in nature. Although the application of our framework to real-world contexts holds potential interest, we believe its incorporation is not essential within the scope of the present paper. As such, we regard this as a captivating and promising avenue for future endeavors—to extend our comprehensive framework, encompassing both calibration and sharpness risks, to real-world applications.
### **Questions**
#### 1\. Monotonicity of $\hat{h}$?
First of all, we want to clarify that Assumption (A2) enforces the monotonicity of $h^*$, NOT the monotonicity of $\hat{h}$. As the reviewer observed, $\hat{h}$ is an estimate of $h^*$ computed as the expected label within the bin that $z$ belongs to, and may NOT be monotone due to the finite sampling effect. However, we anticipate concentration of $\hat{h}$ toward $h^*$, especially with narrow bins and ample per-bin data, leading to a monotonically increasing $\hat{h}$ with high probability.
Furthermore, the reviewer accurately notes that Assumption (A2) concerns the distribution of $Y$. Specifically, the monotonicity of $h^*$ imposes a monotonicity requirement on the conditional distribution of $Y$ given $Z=f(X)$; recall $h^*(z)=E[Y|Z=z]$ from Eq. (7).
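To make the construction of $\hat{h}$ concrete, here is a minimal uniform-mass-binning sketch: sort the calibration scores, split them into $B$ equal-mass bins, and estimate $\hat{h}$ as the mean label per bin (variable names and tie-breaking details are our illustrative choices; the paper's exact estimator may differ):

```python
import numpy as np

def uniform_mass_binning(z, y, n_bins):
    """Fit the histogram-binning estimate h-hat with equal-mass bins.

    z: predicted probabilities on recalibration data; y: binary labels.
    Returns interior bin edges and the mean label per bin (estimate of h*).
    """
    order = np.argsort(z)
    z_bins = np.array_split(z[order], n_bins)   # ~equal sample counts per bin
    y_bins = np.array_split(y[order], n_bins)
    edges = np.array([zb[-1] for zb in z_bins[:-1]])
    h_hat = np.array([yb.mean() for yb in y_bins])  # empirical E[Y | bin]
    return edges, h_hat

def recalibrate(z_new, edges, h_hat):
    """Map new scores through h-hat by locating their bin."""
    return h_hat[np.searchsorted(edges, z_new, side="right")]
```

With narrow bins and many samples per bin, the per-bin means concentrate around $h^*$, which is the sense in which $\hat{h}$ becomes monotone with high probability under Assumption (A2).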
#### 2\. Applications to more complex scenarios:
1. **Real-world data:**
We agree with the reviewer that applications of our framework to real-world classification tasks would be interesting. Nevertheless, it's important to highlight that many real-world tasks involve **multi-class** classification. While it is conceivable to address this, e.g., by transforming it into multiple pairwise binary classification tasks, multi-class classification scenarios lie beyond the scope of the current paper. Thus, we regard it as an intriguing direction for future exploration.
2. **Complex models:**
Post-hoc recalibration treats probabilistic classifiers as black-box entities, only utilizing their predicted probabilities as inputs. Therefore, our recalibration framework applies to **any** probabilistic classifiers, including neural networks and decision trees. As far as we understand, the performance of the model and recalibration are not directly linked. Although different models might yield distinct joint distributions of probabilistic predictions $f(X)$ and labels $Y$, indirectly influencing the fitted recalibration function $\hat{h}$, its performance will remain consistent if it holds distribution-free calibration guarantees [18].
#### 3\. (Un-) necessity of recalibration for large $n$?
We appreciate the reviewer's keen observation regarding the decreasing trend of $R^{cal}(\hat{h})$ as $n$ increases in Fig. 1(a). This behavior signifies that $\hat{h}$ effectively calibrates $f$, resulting in a well-calibrated composition $\hat{h}\circ f$ for large $n$; it's important to clarify that this doesn't necessarily imply $f$ is inherently well-calibrated. Hence, if the initial classifier $f$ lacks calibration, it remains relevant to estimate a post-hoc recalibration function $\hat{h}$. Additionally, we'd like to emphasize that the aspect of sharpness is entirely overlooked in this context; when $B=1$, the composition $\hat{h}\circ f(x)\to EY$ (for all $x$) as $n\to\infty$, leading to a constant classifier that's calibrated but devoid of information. | Summary: The paper studies a very relevant problem of post-hoc calibration in probabilistic classifiers. There is a great body of work on calibrating probabilistic classifiers so that the predicted probabilities match the empirical label frequencies in the popular machine learning literature. However, calibrating probabilistic classifiers so that they retain their sharpness / refinement is not studied much. The paper focuses on the post-hoc calibration with retained sharpness properties from first principles. Its entry point is the classical decomposition of mean squared error of the classifier into sharpness and (mis)calibration. Traditionally, both the measures were important, but the works in machine learning literature has mostly focussed on (mis)calibration only. There are important contributions in this paper: a) proposing recalibration risk measure which attains its minimum value only when both sharpness and the calibration requirements are met. b) uniform mass binning algorithm to approximate the optimal recalibration map. c) Theoretical results on the binning scheme and binning scheme to bound both the calibration risk and the sharpness risk. 
d) Formal result on the trade-off between calibration and sharpness. e) recalibration studies in a simple (but important) distribution shift with label shift. All of these results are for binary classification setting.
Strengths: 1. It is clear that the paper makes theoretical contributions to an important problem. It has been observed that major recalibration algorithms in the machine learning literature not only ignore the sharpness measure but actively degrade it [1]. There is a trade-off between these quantities, and this paper formally states it, with some actionable insights on balancing the trade-off.
2. Theorem 1 generalises a previous result in the literature with an additional insight on the sharpness risk. Generalisations and connections to previous results is a sign of great work.
3. While the results on label shift are easy consequences of the general recalibration results presented earlier in the paper, it is still great to see this extension.
4. In my opinion, the paper is significant and relevant to the machine learning community. I believe the paper will spur more work on post-hoc calibration methods.
5. The paper is excellently written, easy to read and understand.
[1] Aditya Singh et al. On The Dark Side Of Calibration For Modern Neural Networks. (2021) (http://www.gatsby.ucl.ac.uk/~balaji/udl2021/accepted-papers/UDL2021-paper-074.pdf)
Weaknesses: I do not have major concerns with this paper. I have some questions though (please see below):
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. The optimal recalibration function is same as the canonical calibration map defined in [2] (Section 3.1). While it is clearly interesting to see that a canonical calibration map is the optimal recalibration map, could authors comment on this connection? The optimal recalibration map is certainly a calibrated function (as the Proposition 1 in [2] states), I am curious about some insights on why it would also be a map to minimise the sharpness risk.
2. Additionally, [2] also propose a sort of general methodology to estimate this canonical calibration map (Section 4.1). While they do not comment on the partitioning scheme, it would be interesting to draw insights on general partitioning schemes and the bound on the recalibration risk.
3. While the authors mention that extending the results presented in the paper to multi-class setting is a future research direction, could authors see that the estimator provided in [2] could be useful for multi class case (as [2] do not restrict itself to binary classification)?
[2] Juozas Vaicenavicius et al. Evaluating model calibration in classification. (2019)
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Obviously, there are limitations to this work, as there are with any works. However, the authors have been very clear in stating them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for the comprehensive understanding and the positive evaluation from the reviewer, especially in acknowledging our theoretical contribution to an important field while providing actionable insights, and recognizing the impact of this work on the community.
We also appreciate the reviewer's invaluable questions and suggestions, which have led us to think more deeply about extending our theory to the multi-class scenario while drawing connections to a previous paper [41] (we use the reference numbers from the paper to avoid confusion).
We present our perspectives as follows and hope these will address the reviewer's questions and lead to insightful discussions.
### **Questions**
#### 1\. Connection with the canonical calibration map in [41]:
The reviewer's observation is accurate; the optimal recalibration function in our work is the same as the canonical calibration function defined in Eq. (4) of [41] in the binary classification setting. Below, we elaborate on this connection in three aspects.
1. **Equivalence:**
In the framework of [41], a classifier $f: \cal{X} \to \cal{Y}$ is considered *reliable* if and only if $P[Y\in\cdot|f(X)]=f(X)$, that is, the conditional distribution of the target class, given any prediction made by $f$, precisely matches that prediction. This aligns directly with the notion of a *(perfectly) calibrated* classifier, as defined in our paper through Definition 1.
2. **Calibration:**
As pointed out by the reviewer, the optimal recalibration function is calibrated as it follows the form in Proposition 1 in [41].
3. **Sharpness:**
In our work, we define the optimal recalibration function through the minimum-risk criterion, requiring the achievement of zero recalibration risk. Remarkably, this implies the optimal recalibration function has zero calibration risk and zero sharpness risk according to Proposition 1. Alternatively, we can examine the definition of the sharpness risk (Definition 4) for the optimal recalibration function $h^*(z)=\mathbb{E}\left[ Y \mid f(X) = z\right]$. Viewing $h^*(f(X))$ as a function of $f(X)$ and applying the tower property, we have
\begin{align}
\newcommand{\EP}{\mathbb{E}}
\EP \left[ Y \mid h^*(f(X)) \right]
= \EP \left[ Y \mid \EP \left[Y \mid f(X) \right] \right]
= \EP \left[ \EP \left[Y \mid f(X) \right] \mid \EP \left[Y \mid f(X) \right] \right]
= \EP \left[Y \mid f(X) \right],
\end{align}
which establishes that $R^{sha}(h^*) = 0$.
Intuitively, $h^*$ takes the form of the expectation of $Y$ conditioned on the full prediction $f(X)$, in a sense carrying all the "information" within $f(X)$ about $Y$. Therefore, using $h^* \circ f$ will not reduce the variance of $Y$ explained by $f$, thus achieving zero sharpness risk.
#### 2\. Exploring insights from partitioning schemes in [41]:
The reviewer has identified a major challenge for recalibration using a general partitioning scheme.
In particular, the consistency results presented in Theorem 1 of [41] lay a foundation for estimating calibration error, prompting the question of deriving convergence rates for calibration risks. However, effectively controlling the sharpness risk can be intricate when an explicit description of a partitioning scheme is absent.
One insight we draw from [41] is that the maximum diameter of sets in the partition can be used to effectively measure the granularity of the partition, which may be useful in developing sharpness risk bounds.
Considering these aspects, we recognize that bounding the overall recalibration risk for general partitioning schemes is a promising, albeit challenging, trajectory for future research endeavors.
We appreciate the reviewer's thought-provoking comment, as it has motivated us to delve into the intricacies of general partitioning schemes and derive these valuable perspectives.
#### 3\. Potential usefulness of partitioning method for multi-class settings:
We perceive the partition-based estimator in [41] as a promising candidate for extending our framework to multi-class scenarios, representing a natural multi-dimensional progression from the histogram-binning estimator. Beyond the specific estimator outlined in that work, we also recognize an additional potential application of [41] stemming from its introduction of a calibration lens. This lens accommodates diverse facets of partial calibration, a concept of particular interest within the realm of multi-class classification [16, 19].
Furthermore, the work by [41] establishes the almost sure consistency of a binned (=partitioned) estimator for expected miscalibration through Theorem 1. This achievement could potentially serve as a foundational step in establishing the convergence of the binned estimator's calibration risk towards zero, offering a promising initial stride in this direction.
---
Rebuttal Comment 1.1:
Title: Post rebuttal comment
Comment: Thanks to authors for the detailed response. I'm certainly glad that authors found my comments useful, and have been able to draw further connections. Unfortunately, I haven't been able to engage in the discussion to the capacity that I'd have hoped due to some personal issues, but I find the response very insightful. I remain confident that the current paper should appear at NeurIPS, and hence I'm also increasing my score. | Summary: This paper introduces the concept of minimum risk recalibration, utilizing Mean Squared Error (MSE) decomposition. The authors provide justification for their approach by demonstrating that minimizing the proposed risk yields simultaneous minimization of the calibration risk while preserving the sharpness of the probability forecaster. Furthermore, the authors employ the MSE decomposition to analyze the recalibration method known as UMB. Theoretical analysis reveals that selecting an appropriate number of bins allows for achieving a balance between the calibration risk and sharpness. Expanding on their findings, the authors apply their methodology to address the problem of recalibration in label shift. They provide theoretical and experimental validation for their approach in the context of label shift, further supporting the effectiveness of their proposed method.
Strengths: 1. The authors provide a rigorous introduction to key statistical measures essential for the recalibration task, such as recalibration/calibration risk and sharpness risk. They demonstrate that minimizing these measures can result in a well-calibrated forecaster.
2. Building upon the introduced statistics and their properties, the authors enhance the analysis of UMB. They unveil a tradeoff between calibration risk and the preservation of sharpness by establishing a high-probability error bound. Furthermore, they leverage this upper bound to guide the selection of the number of bins in the UMB method, highlighting the practical significance of the proposed bound and decomposition.
3. The authors extend the application of their proposed method to address the challenge of label shift, showcasing the validity of their work in practical downstream tasks.
4. The paper thoroughly discusses the mildness of the assumptions made and experimentally validates the non-trivial nature of the proposed bounds and schemes.
Weaknesses: While the outcome of this study is fruitful, it is encouraged that the authors delve into a further discussion regarding the potential extension of their framework to the multi-class scenario. This extension would provide additional intuitive information and enhance the applicability of their proposed methodology.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please refer to the Weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have analyzed the limitation of the used method in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are overwhelmingly grateful for the reviewer's in-depth understanding and unreserved recognition of our work, encompassing nontrivial concepts, rigorous analysis, practical significance of the tradeoff, applications to downstream tasks, assessment of assumptions, and experiment design for theory verification.
We indeed agree with the reviewer that one of the most promising directions forward is to extend our framework to the multiclass scenario.
We are delighted to provide further discussion in this paper to enhance intuitive understanding and practical applicability of our methodology.
In the context of multi-class classification, the concept of calibration takes on various forms [19].
An intriguing avenue for extension involves canonical calibration [41], which directly extends our methodology.
A major challenge of the extension lies in designing the binning scheme in a multidimensional space, a complexity that has been explored in [19].
While calibration guarantees have been explored in [35], establishing sharpness risk bounds within this multi-class context remains an engaging pursuit.
The interplay between calibration and sharpness could potentially guide the development of a binning strategy that balances these aspects in multi-class classification.
We extend our appreciation to the reviewer once more for this invaluable suggestion of including additional discussion on the multi-class scenario.
Furthermore, we would appreciate the reviewer's further input that could enrich this discussion and enhance the value of our work.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing related works in the multi-class scenario. I will keep my score since the difficulty of the extension to the multi-class scenario is inherent in the task of probability calibration and does not weaken the contribution of this work. | Summary: This work proposes a calibration method for probabilistic classifiers. It is known that most machine learning models produce predictions with high confidence that yield a distribution different than the underlying true label distribution. The proposed method focuses on calibration without a loss on the prediction performance and is claimed to adjust for label shift.
Strengths: - This paper is written clearly with good and easy-to-follow notations. The different definitions are well-placed and help the reader to follow the flow of the paper. Overall good structure of the paper.
- The equal focus on both the calibration and sharpness goals during recalibration is well-motivated and important in this setting. The theory is developed rigorously, and the different choices and assumptions are justified.
- This paper extends the proposed work to the label-shift setting and provides a good discussion on "recalibration under label shift".
- This paper provides experimental results that show the validity of the theoretical work proposed.
Weaknesses: I think this paper lacks the following two points:
- A discussion in section 1.1 highlighting to the reader how this paper approaches the recalibration problem differently than existing work. Only the work in [26] has been addressed.
- Section 6 does not include a discussion about the empirical results of previous works.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable feedback on our work. With gratitude for the positive evaluation, we are dedicated to clarifying and enhancing our contributions by addressing the raised concerns.
### **Weaknesses**
#### 1\. Insufficient discussion on distinction from existing work:
The reviewer's observation is valid, and we appreciate the opportunity to address this concern. While we have selected [26] for the main comparison, it's important to note that our engagement with existing work extends beyond this comparison. In Section 4, alongside the comparison with [26], we underscore in Remark 1 that our calibration risk bound aligns with [18] up to a constant factor in the failure probability. In Section 5, we compared target sample complexity with [28,2,12] in Remark 5, pointing out that our method, which only uses target labels, achieves the same order of risk bounds as the methods using only target features.
We would like to provide further insight into our rationale for selecting [26,18] as the basis of comparison in Section 4. Our objective is to holistically and quantitatively address both calibration and sharpness preservation. A main contribution of our work is to develop risk bounds for both calibration and sharpness risks, which subsequently inform the optimal choice of bin number in uniform mass binning -- a recalibration method extensively studied theoretically and applied in practice. To the best of our knowledge, [26,18] represent the most competitive theoretical works that are relevant to our goal, as they provide state-of-the-art calibration error bounds under their specified assumptions. In contrast, [32,37] lack theoretical underpinning and primarily focus on empirical assessment.
We value the reviewer's perceptive feedback and appreciate the opportunity to clarify our choices of baseline works. Should there be additional works that are better suited for comparison with our theoretical framework, we would appreciate the reviewer's further input, which would improve the robustness and comprehensiveness of our work.
#### 2\. Lack of empirical results comparison and discussion with prior works:
We greatly appreciate the reviewer's observation, which makes us realize the importance of including empirical comparisons between the proposed methods and previous works.
In addition to our original experiments which validated our theoretical claims, we have conducted additional numerical experiments to compare the performance of the UMB method with three other approaches: UWB [16], Platt scaling [34], and the hybrid method [26]. The results and discussion of these comparisons have been incorporated into the Author Rebuttal report. This inclusion enhances the comprehensive evaluation of our work and strengthens its practical implications.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for addressing my concerns. I will keep my score as I am unfamiliar with the overall related work as shown in my confidence score. | Rebuttal 1:
Rebuttal: We appreciate the reviewers' valuable feedback. Our Author Rebuttal addresses recurring themes, offering clarification on methodological contributions and presenting supplementary numerical evidence. Comprehensive point-by-point responses are available in individual rebuttals.
### **Summary of contributions**
**Key highlights.**
Allow us to succinctly underscore the key contributions in this paper.
1. We introduce a novel quantitative framework that interlaces both sharpness and calibration, thereby yielding further insights for the recalibration problem.
2. Applying this framework to histogram binning method, we obtain finite-sample error bounds for the calibration risk and the sharpness risk.
3. Our analysis illuminates a principle guiding the selection of an optimal number of bins, notably identifying $B = O(n^{1/3})$ where $n$ is the calibration dataset size.
4. Considering the label shift, an exemplar of distributional shift, we identify the optimal recalibration function while proposing a pragmatic two-step estimator, fortified with a convergence guarantee.
We appreciate the recognition of our work's theoretical significance by Reviewers 2, 3, 4, and 5. Reviewer 1 also partially acknowledged this contribution, with specific inquiries that we addressed comprehensively in our individual response.
**Significance of contributions.**
We further elaborate on the importance of our work as follows.
1. **Holistic approach:**
Our paper makes a distinct contribution by addressing both sharpness and calibration concurrently, bridging a gap that has persisted within recalibration research. The interplay between these elements, often overlooked, profoundly influences recalibration outcomes. By holistically integrating these aspects, we introduce a comprehensive and principled solution, marking a substantial advancement.
2. **Foundational insight:**
Our proposed method based on histogram binning -- a pragmatic solution among potential alternatives -- and its analysis illuminate a novel recalibration perspective. We underscore that this paper is primarily aimed at establishing a conceptual framework underpinned by strong theoretical foundations. This foundational comprehension carries intrinsic value, effectively complementing practical strategies. Additionally, the simplicity of our approach extends its applicability rather than serving as a limitation.
3. **Significance of optimal bin selection:**
We acknowledge the significance of bin selection, in line with recent work in histogram binning literature such as [26] and [37], emphasizing the need for improved strategies and metrics. Our contribution offers a theoretically-supported approach for optimal bin number selection, bolstering the reliability of recalibration outcomes.
### **Additional experiments**
In response to the reviewers' requests, we conducted additional experiments to compare our proposed method with prominent approaches from the literature. The results of these experiments are detailed in the attached report. Specifically, we assess the performance of four methods: our method (UMB), uniform width binning [16], Platt scaling [34], and Platt-binning [26]. These methods represent two alternative binning approaches, a parametric method, and a hybrid parametric-binning approach, respectively. Additionally, we examine two distinct data distributions that were studied in [25]: Logistic calibration and Beta calibration. The outcomes of these supplementary experiments provide further support for the assertions we made in Section 4.2 of our original submission, where we theoretically compared risk bounds.
Table R.1 presents quadrature estimates of population risks for the four methods under (a) Logistic calibration and (b) Beta calibration. Notably, Table R.1a demonstrates that Platt-binning outperforms scaling and binning methods in terms of $R^{cal}$ when the underlying model assumption is correct (Logistic model). This performance superiority can be attributed to the accurate parametric model assumption, resulting in lower sample complexity compared to nonparametric methods. This finding validates a key claim made in [26]. However, as depicted in Table R.1b, this advantage of Platt-binning diminishes when the true recalibration function deviates from the parametric family. In such cases, the nonparametric binning methods (UMB \& UWB) emerge as the top performers among the four methods.
Figure R.1 provides a visual representation of the recalibration curves for the optimal recalibration function $h^*$ and its estimates under the two distributions. Remarkably, in both Logistic calibration and Beta calibration, histogram binning methods (UWB and UMB) closely track the $h^*$ curve. Conversely, the scaling-binning approach follows the Platt scaling estimates, leading to an inherent bias from $h^*$ in Beta calibration. This visualization provides insightful evidence of the efficacy of our proposed methods.
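For readers unfamiliar with the parametric baseline, Platt scaling [34] fits $\sigma(a\,\mathrm{logit}(p) + b)$ on held-out calibration data. A minimal numpy sketch (illustrative only, with hypothetical synthetic data in the well-specified regime of Table R.1a) is:

```python
import numpy as np

rng = np.random.default_rng(2)

# Calibration data where the true recalibration map is itself logistic,
# so Platt scaling is well-specified (the Logistic-calibration regime).
n = 20_000
p = np.clip(rng.uniform(size=n), 1e-6, 1 - 1e-6)
logit = np.log(p / (1 - p))
y = rng.binomial(1, 1 / (1 + np.exp(-(1.5 * logit - 0.5))))  # true a=1.5, b=-0.5

# Platt scaling: fit sigmoid(a * logit + b) by gradient descent on the log loss.
a, b = 1.0, 0.0
for _ in range(5000):
    z = 1 / (1 + np.exp(-(a * logit + b)))
    a -= 0.5 * np.mean((z - y) * logit)
    b -= 0.5 * np.mean(z - y)

assert abs(a - 1.5) < 0.15 and abs(b + 0.5) < 0.15  # parameters recovered
```

When the true map lies outside this two-parameter family (the Beta-calibration regime), the fitted sigmoid is biased by construction, which is the inherent bias referred to above.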
### **Promising future research: extension to multi-class classification**
As mentioned in the original Discussion, extending our framework to multiclass classification will be an exciting and logical progression. We notice that many reviewers share this view and express interest in its application to multiclass scenarios.
However, a major challenge of the extension lies in designing the binning scheme in a multidimensional space, a complexity that has been explored in [19]. Reviewer 4 (yKq4) highlighted potential insights from partition-based results in [41] for this extension. We acknowledge that while some calibration guarantees have been explored in [35], establishing sharpness risk bounds within this multi-class context remains an engaging pursuit. We believe the interplay between calibration and sharpness could potentially guide the development of a binning strategy that balances these aspects in multi-class classification.
In conclusion, we recognize the multiclass extension as a promising direction for future research.
Pdf: /pdf/b6b1f153d288958827bae8f3a3c04479cf6049ac.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This work looks at methods for calibrating probabilistic classifiers for Mean Squared Error by decomposing it into calibration error and sharpness error. It gives finite-sample error guarantees on both of these for the Uniform Mass Binning method. By balancing sharpness and calibration error, the authors also propose the optimal number of bins to use. Additionally, they look at the problem of calibrating classifiers in the case of label distribution shift between train and test. Their results show that transferring a calibrated classifier requires significantly fewer target samples compared to recalibrating from scratch. They validate their theoretical findings through numerical simulations.
Strengths: - Existing works have looked at finite sample bounds for the calibration error but not the sharpness (as mentioned in this paper). This is the first method to look at both sharpness and calibration together which is an important criterion.
- The paper is generally well written.
Weaknesses: - The main contribution is to give risk bounds for sharpness along with the calibration error. In my opinion, this is not a significant contribution. There is not any particular strategy proposed using this analysis but only choosing the number of bins.
- The second problem of using this method for label distribution shift also seems straightforward and I have also asked a related question below.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why did the authors choose to analyze only Uniform Mass Binning and not other methods like Uniform width binning?
- For the label distribution shift problem, is the proposed method equivalent to just scaling all the predicted probablities according to the target distribution and then using the uniform binning technique?
- In Definition 4, the sharpness risk of $h$ over $f$: is this the sharpness risk of $h \circ f$ minus the sharpness risk of $f$?
- Why is Assumption 3 needed for the label distribution case?
- Just to confirm, is this the first sharpness risk bound?
- In figure 2a, why is the gap increasing between the theoretical and empirical bound as n increases?
- In table 1, the different methods compared are not clear to me. Can the authors please explain?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We appreciate the valuable comments and feedback. Our response is structured to address the highlighted weaknesses first, followed by detailed point-by-point responses to specific questions.
### **Weaknesses**
1\. Not a significant contribution:
We appreciate the reviewer's recognition of our paper's unique contribution in jointly addressing sharpness and calibration, introducing a new recalibration perspective. Yet, we respectfully disagree that our contributions lack significance. We elaborate on our work's importance as follows:
1. **Holistic approach:**
Post hoc recalibration is a crucial issue with a long history. Our paper advances the recalibration field by addressing sharpness and calibration together, bridging a gap in prior research. Sharpness, though often overlooked, plays a crucial role in recalibration. Relying solely on the calibration criterion may lead to suboptimal recalibration. By considering both aspects, we provide a comprehensive and principled framework to effectively tackle the problem, a notable advancement.
2. **Foundational understanding:**
Although not novel, histogram binning presents a functional approach that lays a foundational step in illuminating the recalibration perspective. Our aim is not to propose a complex or competitive strategy, but to establish a solid theoretical basis for recalibration. This foundational understanding holds intrinsic value, complementing practical strategies. Moreover, the simplicity of our method is not a drawback, but rather a blessing, as it extends practical applicability.
3. **Bin selection significance:**
We acknowledge the importance of bin selection in histogram binning literature, a well-recognized topic in calibration research. Our approach aligns with recent efforts, such as [26] and [37], which emphasize the need for improved binning strategies. Our paper contributes by proposing a theoretically-backed approach for selecting the optimal number of bins, bolstering recalibration reliability.
We believe these contributions significantly enhance recalibration and pave the way for future advancements.
2\. Perceived simplicity of the method:
The reviewer described scaling predicted probabilities via the label shift formula followed by recalibration through uniform mass binning (UMB) in Question-2. We stress a key distinction: the operation order is reversed compared to Eq. (15) or (17) in the original submission. Thus, the reviewer's method may not necessarily estimate the optimal recalibration function under label shift. Recall that our recalibration function's optimality is substantiated by Theorem 2's risk bounds.
We appreciate the reviewer's engagement and hope this response clarifies our method's nuances and strengths. Also, we reiterate that the method's simplicity underscores its broad applicability and foundational significance, rather than being a drawback.
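To make the ordering issue concrete, the scaling step in question is the standard label-shift prior-ratio correction. A minimal sketch of that correction in the binary case (ours for illustration, not Eq. (15)/(17) from the paper):

```python
import numpy as np

def label_shift_adjust(p1, source_prior, target_prior):
    """Rescale binary P(Y=1|X) by the prior ratio w_y = q(y)/p(y) and renormalize.

    Standard label-shift correction; whether this scaling is applied before or
    after binning is exactly the ordering distinction discussed above.
    """
    w1 = target_prior / source_prior
    w0 = (1 - target_prior) / (1 - source_prior)
    num = w1 * p1
    return num / (num + w0 * (1 - p1))

p = np.array([0.2, 0.5, 0.8])
adj = label_shift_adjust(p, source_prior=0.5, target_prior=0.8)
assert np.all(adj > p)  # upweighting class 1 raises every probability
assert abs(label_shift_adjust(0.5, 0.5, 0.5) - 0.5) < 1e-12  # no shift, no change
```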
### **Questions**
1\. Rationale for UMB:
Our choice of UMB over UWB was driven by analytical considerations. UMB's well-balanced binning property (Lemma 3 in Appendix B.1), ensuring nearly equal sample sizes per bin, aids analysis. In contrast, UWB can yield varied per-bin sample sizes depending on distributions. Consequently, UWB's calibration risk bounds conservatively rely on the smallest sample sizes among bins [17, Corollary 3], whereas UMB provides distribution-free calibration risk bounds. Moreover, UMB's well-balanced binning ($\Phi_{balance}(B,\alpha)=1$ in Lemmas 6 and 7) ensures robust sharpness risk bounds across diverse distributions.
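The contrast between the two binning schemes, together with the $B = O(n^{1/3})$ rule, can be sketched numerically (illustrative code with a hypothetical skewed score distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.beta(8, 2, size=10_000)   # skewed scores: most mass near 1

n = len(scores)
B = max(2, round(n ** (1 / 3)))        # bin count suggested by the O(n^{1/3}) rule

# Uniform mass binning: edges at empirical quantiles -> near-equal counts per bin.
umb_edges = np.quantile(scores, np.linspace(0, 1, B + 1))
umb_counts = np.histogram(scores, bins=umb_edges)[0]

# Uniform width binning: equally spaced edges -> counts vary with the density.
uwb_counts = np.histogram(scores, bins=np.linspace(0, 1, B + 1))[0]

# UMB keeps per-bin sample sizes balanced; UWB can leave bins nearly empty.
assert umb_counts.min() >= n // B - B
assert uwb_counts.min() < umb_counts.min()
```

The near-empty UWB bins are what force its calibration bounds to depend on the smallest per-bin sample size, while UMB's balanced bins support distribution-free bounds.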
2\. Clarification of the method:
Please see the response to Weakness-2.
3\. Definition 4:
We presume that the reviewer inquires whether the sharpness risk of $h$ equals the reduction in sharpness of $f$ resulting from the application of $h$, i.e., the sharpness of $f$ minus the sharpness of $h\circ f$, which is an accurate interpretation (see Lines 146-147).
4\. Assumption 3:
Assumption (A3) is optional, and its inclusion enhances the sharpness risk bound from $O(1/B)$ to $O(1/B^2)$ (Theorem 1). For the label shift case (Theorem 2), omitting Assumption (A3) relaxes the sharpness risk bound term from $\frac{8K^2}{B^2}$ to $O(1/B)$. We opt for (A3) whenever feasible to showcase the tightest upper bound within a context reasonably common in practical scenarios.
5\. First sharpness risk bound:
To our knowledge, this indeed represents the first formulation of a sharpness risk bound. It is worth mentioning that a related study [26] found that discretization increases MSE by only $O(1/B)$ (Proposition D.4, [26]), which bounds the sharpness risk by $O(1/B)$ for recalibration functions with zero calibration risk. In our work, we integrate this insight into Theorem 1, asserting $GRP(\hat{h})\leq\frac{2}{B}$ in the absence of Assumption (A3).
It's important to underscore that [26] didn't explicitly label their excess MSE bound as a sharpness risk bound, nor did they emphasize the independent existence of such a bound beyond discretization. In fact, the bound of sharpness risk strictly implies their results as a special case. Furthermore, we enhance the sharpness risk bound to $O(1/B^2)$ by introducing Assumption (A3).
6\. Figure 2a:
Our focus is mainly on the order of the risk bound, which holds significance in asymptotic contexts. As such, the expanding gap in Figure 2(a) and the uniform gap in Figure 2(b) could potentially be attributed to a constant multiplicative factor. Remarkably, the theoretical and empirical optimal values for $B$ closely align in order, evident from their near-parallel trends in Figure 2(b).
7\. Table 1:
Due to page limit, we moved the detailed method explanation to Appendix E.2. We're aware of possible clarity concerns in Table 1's caption and understand the value of summarizing these details in the Experiments section. In the forthcoming camera-ready version upon acceptance, we aim to improve clarity by incorporating these explanations into Table 1's caption or integrating them into the main text.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I would like to thank the authors for providing the detailed response. After reading the rebuttal and other reviewers' comments, it seems like providing guarantees for sharpness and a strategy to choose optimal bins is an important contribution to a fundamental problem. I have increased my score. | null | null | null | null | null | null |
PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning. | Accept (poster) | Summary:
This paper targets injecting personalized prior knowledge into the global model, aiming to mitigate the incomplete-information problem in PFL. The idea is to decouple the personalized prior from the local objective function regularized by Bregman divergence. Mirror descent (RMD) is used to extract the prior.
The authors present a convergence analysis and extensive experimental results.
Strengths: 1. The authors conducted many experiments and show improved results. For example, the proposed method has a higher deviation in missing classes in local testing and a lower deviation in global testing.
2. The idea seems to be interesting.
Weaknesses: 1. Unclear motivation and literature review.
2. Lack of an explainable summary of the main idea. Given the abstract, I think the idea seems to be interesting, but I got lost in the details and symbols when reading Section 4 and Section 5.
3. The writing requires to be improved.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: **Background**
1. In Lines 43-46, "Most of the insightful works [17, 50] propose assumptions for recovering this incomplete information, but these assumptions are implicit, which limits the way to use the information to develop personalized strategies. To address the former issue above, ...".
What is the exact problem/issue of the previous assumption? Can the authors explain more about the motivation? What is "limits the way to use the information" exactly here? I did not catch the exact challenge that the authors target here.
2. In lines 46,47, the major contribution comes from "we propose framework pFedBreD to inject personalized prior knowledge (PPK) into the one provided by a global model." Is there a research line for injecting personalized prior knowledge? What is the advantage of pFedBreD compared with them?
I see the authors elaborated on "Ablation Analysis of Personalized Prior" from Line 269 to Line 283. Can authors explain more about PPK-relevant research line in a more intuitive manner?
**Method and Framework**
3. In lines 103,104, "Exponential Family The regular exponential family (X-family) is a relatively large family... Therefore, to yield the prior, we employ the X-family..." Is the employment of X-family due to "large family"? Is there any other special advantage for X-family?
4. I can follow the equations of Section 4 from a mathematical view. Would the authors elaborate on the intuitive logic of combining these equations?
5. Section 4 is "methodology", meanwhile, Section 5 is "framework". Does the designed framework belong to the proposed methodology? Or, what is the relation between the two sections?
6. In Line 156, "Inspired by the aforementioned motivation," what is the "motivation"? I tried to search for "motivation," but it appears only once in the paper... Is it relevant to some type of mathematical objective function?
7. In Line 163 and Line 164, "To solve the optimization problem in Eq. (12), we use gradient-based methods to solve the global problem ..." Based on my knowledge, the conventional method is "gradient-based methods." Is there any other special optimization in your framework?
**Experiment**
8. In Line 220, "The results of average accuracy per client are shown in Table 1." Could the authors summarize the insight conclusion of Table 1 in the main text?
9. What is the relation between "RMD" and "mh"? Are any experiments relevant to "RMD"?
**Minor**
- In Line 85, "a expectation"
- The clickable indices for references/tables/equations/etc. do not work, which is inconvenient when searching for the relevant content.
- In Line 113 - Line 115, long sentence with grammar error.
"In this section2, we introduce missing client-sampling information based on classic FL use EM to reduce the computational cost of the information-introduced FL problem, and propose RMD, a class of prior selection strategies, based on the E-step in EM"
- Equation 5 requires the "KL" function to be introduced before use.
- In Line 168, $T$, $R$, $N$ appear suddenly without being introduced in the main body.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors try to empirically address the limitation of instability in the global model with aggregation noise.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Comments***:
- We sincerely thank reviewer JezB for appreciating our idea. The reviewer's main concerns are about the main idea and our methodology; we answer the questions one by one and clarify the structure and design of our paper.
***Responses***:
1. [W1, W2, and W3] Motivation and readability:
- We briefly set out the *main idea and structure* of the paper. This paper aims to address the *information overlooked problem*, and everything that follows is proposed for this purpose. Please see the [general responses](https://openreview.net/forum?id=kuxu4lCRr5&noteId=WzbbMONDyW) for a detailed explanation of the motivation [W1, W2] and the readability improvements [W3].
2. [Q1, W1] The issue of the implicit assumption? How does it limit the way?
- The main purpose of the explicit expressions in most classical methods is for parameterization, computational convenience, and closed-form solutions, e.g., re-parameterization for variational inference, Bayesian conjugate prior.[f, g]
- In this paper, under the implicit prior assumption, the complete information is not parameterized and is computationally expensive (Lines 125-134). The substitute is to tune this information by way of regularization, representation, or loss-function design. Under the explicit prior assumption, once the parameters are given, the prior distribution is *completely determined*, and the local information that comes after that *comes exclusively from the local data*. Thus, we can design theoretically supported strategies that work directly with the parameters.
3. [Q2, W1] Is there a research line for injecting personalized prior knowledge (PPK)?
- To the best of our knowledge, this is *the first paper* presenting the concept of PPK in PFL. The introduction of PPK is meant to address the problem of overlooked information, and this is the first time this problem has been formally discussed in PFL. For space reasons, we can discuss this further in the discussion period.
4. [Q3] Why X-family? More discussion.
- To broaden the discussion of the main problem in this paper, we use the *X-family (exponential-family) prior assumption* for its *broad coverage* in both practice and theoretical analysis. According to the *relationship between the X-family and B-Div*, as shown in Equation 3 and elaborated in Line 601-612 in the Appendix, introducing the B-Div brings *easy-to-calculate properties*, e.g., the first-order moment estimate is the expected point with the closest Bregman distance (i.e., B-Div; Line 507-509 in the Appendix). These properties yield many *closed-form solutions that replace numerical approximations*, e.g., the gradient of the Bregman-Moreau envelope, and simpler forms of both the Fenchel conjugate duality and the transformation between the natural and expected parameters of an X-family. This assumption makes the problem well-computable (Line 127-139) within an optimization framework, and the computation method, mirror descent, is introduced with theoretical support [39]. If anything in our responses is unclear, we would be glad to discuss it during the discussion period.
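To make the first-order-moment property concrete, here is a minimal numerical sketch (our own illustration with hypothetical toy values, not code from the paper) for the Bregman divergence generated by $\phi(t)=t\log t$ (generalized KL): the point minimizing the average Bregman distance to a sample coincides with the sample mean.

```python
import numpy as np

def breg_kl(x, s):
    """Bregman divergence D_phi(x, s) for phi(t) = t*log(t) (generalized KL)."""
    return x * np.log(x / s) - x + s

rng = np.random.default_rng(0)
data = rng.uniform(0.5, 2.0, size=1000)  # positive toy samples

# Grid-search the point s that minimizes the average Bregman distance to the data.
grid = np.linspace(0.5, 2.0, 3001)
avg_div = np.array([breg_kl(data, s).mean() for s in grid])
s_star = grid[avg_div.argmin()]

# The minimizer coincides with the first-order moment (the sample mean).
print(s_star, data.mean())
```

This illustrates why the first-order moment admits a closed-form characterization under a Bregman distance, which is the computational convenience the rebuttal appeals to.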
5. [Q4] The intuitive logic of the equations in Section 4 is as follows [please see the motivation explanation in the [general responses](https://openreview.net/forum?id=kuxu4lCRr5&noteId=WzbbMONDyW)]:
- Math. modeling. [Eq. 5]
- Implicit complete information $\Theta_{i}$ is *not directly computable*. [Eq. 6]
- Expectation maximization (EM) *makes it computable* and the prior explicit. [Eq. 7-8]
- The optimization method Mirror Descent (MD) is used *to compute the EM problem*. [Eq. 9]
- Relaxing the constraints of MD so that there is *room for personalization*. [Eq. 10-11]
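To make the MD step above concrete, here is a generic textbook sketch of mirror descent with the negative-entropy mirror map on the probability simplex (the exponentiated-gradient update). It is an illustration of the method only, not the paper's exact Eq. 9-11 or its relaxed variant.

```python
import numpy as np

def mirror_descent_simplex(grad_fn, x0, steps=300, lr=0.5):
    """Mirror descent with the negative-entropy mirror map on the simplex.

    Each iteration takes a gradient step in the dual (log) space and maps
    back via normalization, i.e., the exponentiated-gradient update."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_fn(x)
        x = x * np.exp(-lr * g)  # dual-space gradient step
        x = x / x.sum()          # Bregman projection back onto the simplex
    return x

# Minimize f(x) = ||x - t||^2 over the simplex; the optimum is x = t
# because t already lies in the interior of the simplex.
target = np.array([0.7, 0.2, 0.1])
x_star = mirror_descent_simplex(lambda x: 2.0 * (x - target), np.ones(3) / 3)
print(x_star)  # close to [0.7, 0.2, 0.1]
```

The normalization step is exactly the Bregman projection induced by negative entropy; relaxing such constraints is what the response describes as creating room for personalization.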
6. [Q5] The relation between Sec. 4 and 5.
- Section 4 is about *optimization* methodology, and Section 5 is about *computation* framework. The latter is a practical implementation of the former.
7. [Q6] What does the *aforementioned motivation* in Line 156 stand for?
- The typical motivation for a computation framework is *the modeled optimization problem*, which is in Line 148-153, i.e., the methodology in Section 4.
8. [Q7] Why is *gradient-based methods* mentioned?
- We did *not fix* exactly how to solve the bi-level optimization problem *until Section 5*. Many computation methods exist for the optimization problem; in Section 5 we only employ a gradient-based method, but other methods are also viable, e.g., Newton, evolutionary, MCMC-Bayesian, or other approximate-Bayesian sampling-based computation methods discussed in Line 540-570 in the Appendix.
9. [Q8] The insightful conclusion of Table 1.
- Table 1 supports the comparison *in Line 241-262*, where we provide insights from three perspectives: convex vs. non-convex problems, easy vs. hard tasks, and text tasks. The most insightful one, we think, is *in Line 257-262*: the overlooked information forces the specific model on each client to re-obtain this information from scratch, solely from the data, during training. Table 1 provides an overall evaluation and identifies the tasks our method specializes in, as claimed *in Line 65-67*.
10. [Q9] What is the relation between "RMD" and "mh"? Are any experiments relevant to "RMD"?
- RMD is a class of strategies determined by $\Phi$.
- With given $\Phi = f_{i}+F_{i}$, mh is an implementation of RMD, i.e., one strategy in the class of strategies RMD.
- Any experiments about mh are therefore experiments relevant to RMD.
***Minors***:
We will carefully check the paper to correct similar issues.
1. Yes, that should be *an*.
2. Technical issue; we will fix it.
3. The sentence should be *...on classic FL, use EM...*. [missing comma]
4. The KL-divergence will be elaborated.
5. We will improve its readability.
***References***:
[f] Variational Autoencoder Based Anomaly Detection Using Reconstruction Probability. Special Lecture on IE 2.1 (2015): 1-18.
[g] A Geometric View of Conjugate Priors. Machine Learning 81 (2010): 99-113.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response and patient explanation. I will improve my rating but I need more time to consider everything on this page. I will give an additional comment later.
---
Reply to Comment 1.1.1:
Title: Response to the Official Comment
Comment: Thanks for participating actively in the discussion and we sincerely appreciate the time and effort the reviewer JezB put into this work. We are delighted that our responses address the concerns and are perceived positively.
Please feel free to engage in discussion with us during the feedback phase, as we are eager to receive constructive suggestions to enhance our work. | Summary: The authors of the paper consider the problem of personalized federated learning. The main problem the authors attempt to tackle is the information on sampling of clients being overlooked. Specifically, they attempt to introduce two major steps in the training of personalized models: the first one is the injection of personalized prior knowledge into the global model before training, the second is the extraction of the prior before sending the global model updates. The authors formally present the problem of overlooking client-sampling information, then they utilize a framework for the problem called pFedBreD. Additionally, they provide extensive theoretical examination of the framework, as well as numerous experiments to validate the superior performance of their solution compared to a handful of baseline algorithms.
Strengths: - The paper is very well-written. It is clear and comprehensive, and its structure is sound.
- The authors motivate the problem very well, and provide a sound formulation of it.
- The theoretical examination of the problem, especially the results are well-presented.
- The extensive experimental results, especially the comparisons with various baselines are useful for the completeness of the evaluation of the proposed framework. A number of datasets as well as different models are considered. Moreover, the analysis of the results is insightful.
Weaknesses: I have a couple of comments on the weaknesses of the paper:
- I think the main body of the paper is well-written as I mentioned in the weaknesses, but I also think it lacks fundamental parts that can be added to improve the readability of the paper. For example, the pFedBreD algorithm is not included in the main body but it is an important part that needs to be presented in the main body of the paper for completeness of the presentation. Similarly, the authors refer to Theorem 1 in the paper but it is not mentioned until the Appendix D. The paper should be self sufficient for the reader.
- The authors use acronyms in the paper before mentioning what they are, for example the use of **mh**, **lg**, and **meg** before explicitly defining them or mentioning what they stand for. This can be confusing for the reader and needs to be rectified.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The mentioned weaknesses need to be addressed. The main body of the paper should be improved to include the missing parts as well as the acronyms should be clearly defined.
Further, the paper needs to be checked for typos. For example: line 85, "pFedBreD is a expectation" -> "pFedBreD is an expectation", line 95 "Bregman divergencee" -> "Bregman divergence", line 213: "we choose following" -> "we choose the following".
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors present the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Comment***:
- We sincerely thank the reviewer qY3t for the appreciation and the constructive suggestion to further improve clarity and readability.
***Responses***:
1. [W1, Q1] Suggestions for improving readability.
- Thanks for the constructive suggestions on readability; we will make the following modifications in the revision:
- The pFedBreD algorithm in Line 527-530 in Appendix A.5 will be moved from the appendix to between Line 169 and Line 170 in the main body of the paper.
- The Theorems in Appendix D.1 will be put in Section 5 before Remark 1 in the main body.
- Other modifications for readability are listed as follows: (Thanks for all the suggestions from the reviewers)
- All the full names of the acronyms will be placed before or near the acronyms. [W2, which will be discussed in detail later]
- The motivation and framework illustration, Fig. 1, will be more visually linked to the relevant parts of the mathematical methodology and the computation framework. [Suggested by reviewer [Q28a](https://openreview.net/forum?id=kuxu4lCRr5&noteId=uuT3KKlAUX)]
- We will add a subsection to formalize the overlooked-information problem, focusing on the following: [Suggested by reviewer [jGgs](https://openreview.net/forum?id=kuxu4lCRr5&noteId=XWZD1FWgxw)]
- **The core of the problem**: global model $w=E_{i}w_{i}=E_{i}w_{i}|i \Rightarrow$ Mutual Information $I(w;i)=0$.
- **Effects**: the global model *has no mutual information with the client sampling $i$*, as shown in the above equation and discussed in Line 120-124, in particular when applying the regularization $R(w^{(t)};...)$ or the local initialization $w_{i,0}^{(t)}\leftarrow w^{(t)}$, where $w_{i}$ is the local model on the $i^{th}$ client. This forces the specific model on each client to re-obtain this information from scratch, solely from the data, during training, which is particularly distinctive on text tasks (see the analysis in Line 259-262) and especially impactful on hard-to-learn representations and datasets.
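The $I(w;i)=0$ statement can be checked with a tiny self-contained sketch (our own illustration with hypothetical toy values): when every client receives the same broadcast model regardless of its index $i$, the empirical mutual information between the transferred model and the client index is exactly zero, whereas a client-dependent (personalized) prior carries positive mutual information.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information I(W; I), in nats, from (w, i) samples."""
    n = len(pairs)
    p_wi = Counter(pairs)
    p_w = Counter(w for w, _ in pairs)
    p_i = Counter(i for _, i in pairs)
    return sum(
        (c / n) * math.log((c / n) / ((p_w[w] / n) * (p_i[i] / n)))
        for (w, i), c in p_wi.items()
    )

clients = [0, 1, 2]
# Conventional FL: every client receives the same broadcast model "g",
# so the transferred knowledge is independent of the client index i.
broadcast = [("g", i) for i in clients]
# A personalized prior: the transferred knowledge depends on i.
personalized = [(f"g{i}", i) for i in clients]

print(mutual_information(broadcast))     # 0.0
print(mutual_information(personalized))  # log(3) ≈ 1.0986
```

This is only a discrete toy model of the broadcast step, but it captures why a single shared global model cannot carry client-sampling information.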
2. [W2, Q1] The unclearly defined parts of acronyms.
- Thanks for pointing this out. All the full names of the acronyms will be placed before or near the acronyms. *In Line 634-638 in the Appendix*, we give the definitions and the reasons behind the full names of the acronyms. The three implementations of $\mu_{i}$, i.e., lg, meg and mh, stand for *loss gradient*, *memorized envelope gradient* and *memorized hybrid*, respectively. *Memorized* means that we choose the gradient of the Bregman-Moreau envelope $\nabla F_{i}(w_{i,r-1}^{(t)})$ as $\eta[w_{i,R}^{(t-1)} - \theta_{i,r-1}^{(t)}]$, where $\eta \ge 0$ is a step-size-like hyper-parameter: each local client memorizes its own local part of the latest global model $w^{(t)}$ from the last global epoch, $w_{i,R}^{(t-1)}$, instead of using $w_{i,r-1}^{(t)}$ in practice.
***Minors***:
1. [Q2] Typos.
- Thanks for pointing out, and we will fix all typos in the revision.
---
Rebuttal Comment 1.1:
Comment: I have read the other reviews and the detailed responses by the authors. I thank the authors for addressing my comments, and other reviewers', accordingly, and highlighting such revisions in the response.
---
Reply to Comment 1.1.1:
Title: Response to the Official Comment
Comment: We would like to express our heartfelt appreciation to the reviewer qY3t for the constructive comments and acknowledgment in reviewing and improving our work. | Summary: This paper proposes pFedBreD, which decouples the personalized prior from the local objective function regularized by Bregman divergence for greater adaptability in personalized FL. Extensive experiments validate the effectiveness of the proposed method on 5 datasets.
Strengths: S1. The problem of overlooking client-sampling information when prior knowledge is transferred is important.
S2. A novel framework, pFedBreD, is proposed for computing the Bayesian optimization problem. The theorem is provided to analyze the convergence.
S3. The experiments validate the effectiveness of pFedBreD on five widely-used datasets.
Weaknesses: W1. The problem is not well-motivated. To be more specific, the shortcomings of existing PFL methods are not well explained. The client-sampling information should be clearly illustrated, and the reason why the prior knowledge extraction is challenging is missing.
W2. The problem should be formally defined. It would be better to add a subsection to provide the statement of the research problem of this paper.
W3. The efficiency of the proposed methods is not well analyzed.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Q1. The assumption this paper makes is the uniform sampling of clients. How about other sampling methods? Is the proposed method applicable to other sampling scenarios?
Q2. In Table 1, the accuracy of FedAMP is better than the proposed method on some datasets. In particular, the accuracy of the MCLR of FedAMP on different datasets is close to or even outperforms mh. The reason should be provided.
Q3. How to determine the number of global/local epochs in your experiment setting?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors adequately pointed out the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Comment***:
- We sincerely thank the reviewer jGgs for recognizing the importance of the problem we introduced and for the suggestions on improving readability. We address the reviewer's concerns about motivations and challenges, and answer the questions as follows.
***Responses***:
1. [W1] More explanation about motivations.
- Thank you for pointing this out. We provide further explanation of the existing shortcomings in PFL that this paper focuses on; the two major ones are as follows [please see the [general responses](https://openreview.net/forum?id=kuxu4lCRr5&noteId=WzbbMONDyW) for more about the motivations]:
1. **What exactly are the effects of the overlooked client-sampling information?** The salient shortcoming of this overlooking is that *the global knowledge transferred to each client is the same*, forcing the specific model on each client to re-obtain the local information from scratch, solely from the data, during training, which is particularly distinctive in text tasks as analyzed in Line 259-262. This is especially important for hard-to-learn representations and datasets.
2. **Why is *explicit* prior-knowledge extraction challenging?** The main purpose of explicit expressions in most classical methods is parameterization, computational convenience, and closed-form solutions, e.g., re-parameterization techniques in variational inference and conjugate priors in Bayesian methods. In this paper, under the implicit prior assumption, the complete information is *not parameterized and is computationally expensive* to handle (Line 125-134). The substitute is to tune this information through regularization, representation, or loss-function design. Under the explicit prior assumption, once all the parameters are given, the prior distribution is *completely determined*, and the local information obtained afterwards *comes exclusively from the local data*. Thus, we can design theoretically supported strategies directly with the parameters.
2. [W2] Formal definition and suggestion about additional subsection.
- Thanks for the suggestion of adding a subsection with a formal definition of the overlooked-information problem to improve readability. We consider adding a subsection focusing on the following: [We welcome detailed discussion during the discussion period.]
- **The core of the problem**: global model $w=E_{i}w_{i}=E_{i}w_{i}|i \Rightarrow$ Mutual Information $I(w;i)=0$.
- **Effects**: the global model *has no mutual information with the client sampling $i$*, as shown in the above equation and discussed in Line 120-124, in particular when applying the regularization $R(w^{(t)};...)$ or the local initialization $w_{i,0}^{(t)}\leftarrow w^{(t)}$, where $w_{i}$ is the local model on the $i^{th}$ client. This forces the specific model on each client to re-obtain this information from scratch, solely from the data, during training, which is particularly distinctive in text tasks as analyzed in Line 259-262.
3. [W3] More analysis about efficiency.
- Thanks for the suggestion; we will discuss efficiency further, as follows and in the revision. Please see [Q28a rebuttal Q3](https://openreview.net/forum?id=kuxu4lCRr5&noteId=KKUTOVP2PR) for more details, and we would be glad to provide more during the discussion period:
- We discuss *efficiency* in terms of time and space complexity. The complexity comparison is shown below and *in Appendix A and the last column of Table 2*. Our method *shares the same complexity* as pFedMe and FedAMP, where $N$, $T$, $R$ and $K$ are respectively the number of clients, global epochs, local epochs, and proximal-solver iterations. *As shown below and in Table 2*, the local computation adds only limited cost for a significant improvement.
|Complexity/Methods|FedAvg|pFedMe|Per-FedAvg|FedAMP|pFedBreD (ours)|
|-|-|-|-|-|-|
|Entire Sys. Memory|$O(N)$|$O(N)$|$O(N)$|$O(N)$|$O(N)$|
|Entire Sys. Time|$O(NTR)$|$O(NTRK)$|$O(NTR)$|$O(NTRK)$|$O(NTRK)$|
|Additional Local Computation beyond FedAvg|-|Addition|Gradient|Weighted Sum and L2-Distance|Gradient and Addition|
|Average Acc. on All Tasks|72.80|79.48|80.02|81.80|83.09|
4. [Q1] How about other sampling scenarios?
- The proposed method *is applicable to any scenario* with a known or given sampling distribution, provided the expectation exists, as mentioned *in Line 188-189* of the Convergence Analysis (CA). We analyze a *uniform* client-sampling setting, $E_{i}=\frac{1}{N}\sum_{i=1}^{N}$, for simplification. *Other sampling methods can be handled* via the client-sampling expectation $E_{i}[F_{i}] = F$ by changing the sampling weights. In the CA, we use uniform sampling *only to simplify* the theorem; we only rely on the linearity of the operator $\frac{1}{N}\sum_{i=1}^{N}$ in the convexity arguments, Jensen's inequality, and gradient calculations, which could be replaced by any linear $E_{i}$.
5. [Q2] Analysis about FedAMP.
- We do provide an analysis of FedAMP *in Line 242-251*, which is fully consistent with our experimental results; the main points are as follows:
- *On convex problem, FedAMP outperforms our method ... One possible reason ... FedAMP uses the distance between models as a similarity in the penalty point selection ... since there is only one global optimum, this penalty point tends not to change ... a non-dynamic regular term ... will not be as advantageous for non-convex problems ... as penalty point tends to fall into the local optimum ...*
6. [Q3] The number of global/local epochs.
- The numbers of global/local epochs $T$/$R$ are shown below and in Line 224. Empirically, different tasks require different numbers of training epochs to reach convergence; a smaller $R$ is recommended if global stability is desired, and a larger one if personalization and convergence speed are desired.
| Dataset-Model | CIFAR-10 | Sent140-LSTM | FEMNIST-MCLR/DNN | FMNIST-MCLR/DNN | MNIST-MCLR/DNN |
|-|-|-|-|-|-|
|Local Epochs $R$|20|20|20|20|20|
|Global Epochs $T$|2000|800|800|200|200|
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I believe that the authors have addressed all my concerns and I will keep the score.
---
Reply to Comment 1.1.1:
Title: Response to the Official Comment
Comment: We would like to express our sincere gratitude to the reviewer for endorsing our work and providing constructive suggestions. | Summary: In this paper, the authors propose pFedBreD to decouple prior knowledge from each client. pFedBreD extracts the personalized prior with Bregman Divergence for better performing personalized tasks. The authors provide convergence analysis and experiments evaluated on 5 datasets.
Strengths: 1. The authors give a detailed convergence analysis for the proposed method.
2. The theoretical derivation of the paper is comprehensive in the Appendix.
3. The authors provided extensive experiments for different tasks and models and the results show the effectiveness of improving the local models' performance.
Weaknesses: 1. The motivation of the work is not clearly presented. It is hard to understand why the authors use Bregman Divergence and Relaxing Mirror Descent.
2. It is confusing that the authors use the average results and the standard deviation of them on all tasks to represent the overall performance of methods. Could the authors give some intuition or explain the purpose of this?
3. More related works need to be introduced. And we'd like to see experimental results compared to more related works, such as [1], [2], [3], and [4].
4. This paper focuses on the specific label-distribution problem. It would be better if the authors could discuss the performance on other heterogeneous data-distribution problems (e.g., where the number of classes differs across clients).
[1] Personalized Federated Learning with Feature Alignment and Classifier Collaboration. ICLR 2023.
[2] Ditto: Fair and robust federated learning through personalization. ICML 2021.
[3] Personalized Federated Learning with First Order Model Optimization. ICLR 2021.
[4] Personalized federated learning using hypernetworks. ICML 2021.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The intuition of the work is not clear. Could the authors explain the motivation for using Bregman Divergence and Relaxing Mirror Descent in detail?
2. In Table 1, why the second-best performance is 79.44 (mh) rather than 79.68 (Per-FedAvg+FT)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed the potential limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Comment***:
- We sincerely thank the reviewer HD18 for the appreciation and constructive comments. The reviewer raises concerns about the motivation for using Bregman divergence (B-Div) and relaxing mirror descent (RMD), and we answer the questions one by one.
***Responses***:
1. [Q1, W1] The Motivation of B-Div and RMD.
- **B-Div**:
- **Modeling**: the main problem studied in this paper is the overlooked client-sampling information. To *widen the scope of the mathematical modeling of the main problem*, we use the exponential-family (X-family) prior assumption, due to its broad coverage in both practice and theoretical analysis.
- **Calculation**: according to the relationship between X-family and B-Div, as shown in Equation 3 and more in Line 601-612 in Appendix, the introduction of the B-Div brings about *the properties of easy calculation*, e.g., the first-order moment estimation point is the expected point with the closest Bregman distance (i.e., B-Div), in Line 507-509 in Appendix.
- **Computation**: these properties yield many easy-to-compute *closed-form solutions* instead of numerical approximations, e.g., the gradient of the Bregman-Moreau envelope, and simpler forms of both the Fenchel conjugate duality and the transformation between the natural and expected parameters of an X-family.
- **RMD**:
- **Pre**: the introduction of Expectation Maximization (EM) and the X-family prior assumption in Line 124-134 transforms the optimization problem into an easy-to-compute one, moving from an implicit to an explicit assumption.
- **Mirror descent (MD)**: we use MD as the proposed computational framework with rigor, according to the relationship among EM, X-family and MD, i.e., EM under the X-family assumption is MD [39] in Line 142.
- **Relaxing**: relaxing the constraints of MD *provides space for personalization* that MD would not otherwise have (Line 140-148).
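As a small numerical sketch of the closed-form gradient of the Moreau envelope mentioned among the computational benefits (shown here in its Euclidean special case as an illustration, not the paper's implementation): the envelope gradient is $(w - \operatorname{prox}_{\lambda f}(w))/\lambda$, which can be checked against the known closed form for a quadratic $f$.

```python
import numpy as np

def prox_point(f_grad, w, lam, steps=200, lr=0.1):
    """Approximate prox_{lam*f}(w) = argmin_theta f(theta) + ||theta - w||^2 / (2*lam)
    by plain gradient descent (Euclidean special case of the Bregman proximal map)."""
    theta = np.array(w, dtype=float)
    for _ in range(steps):
        theta -= lr * (f_grad(theta) + (theta - w) / lam)
    return theta

def envelope_grad(f_grad, w, lam):
    """Gradient of the Moreau envelope: (w - prox_{lam*f}(w)) / lam."""
    return (w - prox_point(f_grad, w, lam)) / lam

# Check against the known closed form for f(theta) = ||theta||^2 / 2:
# prox(w) = w / (1 + lam), so the envelope gradient is w / (1 + lam).
w, lam = np.array([1.0, -2.0]), 1.0
g = envelope_grad(lambda t: t, w, lam)
print(g, w / (1 + lam))
```

The same prox-based formula is what makes the envelope gradient cheap to evaluate once the proximal point is available, which is the computational advantage the response appeals to.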
2. [W2] Why do we put the avg. and the std. on all tasks in Tables?
- Thanks for pointing this out. Since we have many benchmarks, *for ease of reading* we provide the additional columns with statistics of performance and stability, to capture the overall results of the comparison. They also help *differentiate the hard tasks from the others*.
3. [W3] *More related works and comparison*.
- Thanks for the constructive suggestions. We add the comparison with the methods [a-d] in the revision. We provide the results in the [*general responses*](forum?id=kuxu4lCRr5&noteId=WzbbMONDyW), with the analysis as follows:
- **Results Sum.**:
- In comparison, as mentioned in Line 65-67, our method still shows strong accuracy and robustness to the aggregation ratio: our accuracy surpasses the baselines by at least 2.7/0.5/0.8 on the three hard tasks, and our average decrease is only 0.37, compared with about 0.6-1.5 for the others.
- In the additional experiments, FedPAC [a], whose main module (FA) pulls representations toward the global feature centroid, is effective, but it is not specifically designed for the overlooked client-sampling information, so it performs relatively poorly in some scenarios, e.g., low-aggregation-ratio settings, regression tasks, and text tasks.
- **Analysis**:
- Our method, with its personalized prior, does not have to re-obtain all local information from scratch solely from the data during training. This is especially important for hard-to-learn representations and datasets.
- The FA [a] may be affected by the aggregation noise from a low ratio, because the noise makes the global feature centroid [a] unstable and causes it to fail.
- For example, our method's design is crucial for Sent140 from LEAF [11]. The task on Sent140 is a regression task (or binary classification) on a text dataset with heterogeneous input data obtained from social software. The landscape of representations on text tasks is more rugged [53, 16, 13], and the heterogeneous inputs make it harder.
4. [W4] This paper focuses on the specific label distribution problem. How about others?
- **Clarification**:
- Theoretically:
Our modeling of the data $(x_i,y_i)$ in Line 113-119, *the pairs of inputs and labels in the dataset $d_i$ on the $i^{th}$ client*, does not depend on a specific label distribution. We do not constrain the labels in the methodology, framework, or convergence analysis.
- Experiments:
To verify the effectiveness of pFedBreD on different distributions, we conduct the experiments shown in the table below. There are 4 data-partition methods, shown in Appendix C.5, and the datasets in our experiments are not restricted to a specific label distribution:
|Datasets|FEMNIST|Sent140|FMNIST/MNIST/CIFAR-10|$\alpha$-FMNIST|
|-|-|-|-|-|
|Partition Methods|Uniform and dominant class [a, 11]|Heterogeneous input data [67, 11]|Rotation allocation|$\alpha$-partition [28] in Table 4|
|Our performance (Acc.)|70.3|73.7|99.0/93.0/80.6|Avg. 34.8|
|The best performances among our baselines|66.8|71.3| 98.7/92.2/79.7|Avg. 34.4|
***Minors***:
1. [Q2] The second-best should be 79.68 (Per-FedAvg+FT).
- Thanks for pointing this out; we will fix it and carefully proofread our paper in the revision.
***References***:
[a] Personalized Federated Learning with Feature Alignment and Classifier Collaboration. ICLR 2023.
[b] Ditto: Fair and Robust Federated Learning Through Personalization. ICML 2021.
[c] Personalized Federated Learning with First Order Model Optimization. ICLR 2021.
[d] Personalized Federated Learning Using Hypernetworks. ICML 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the response. All my concerns have been addressed. I will raise my score.
---
Reply to Comment 1.1.1:
Title: Response to the Official Comment
Comment: We would like to sincerely thank reviewer HD18 for the time and effort they put into this work. We are pleased that our responses address the concerns and have been positively regarded. | Rebuttal 1:
Rebuttal: # General Responses:
We begin with the following responses about the structure of our paper and the results of additional experiments, leaving more space for the responses to each reviewer.
- A more intuitive, high-level line of logic, which may help with understanding and reading, is as follows [(j) denotes the related section j]:
- **Practical Problem (1): overlooked client-sampling information problem $\rightarrow$ Math. Problem Modeling (4): Bayesian modeling global problem as Maximum Likelihood Estimation (MLE) $\rightarrow$ Optimization Problem Modeling and Framework (4 & 4.1): Introduction of incomplete information and bi-level optimization with Expectation Maximization (EM) $\rightarrow$ Optional Implemental Strategies (4.2): Relaxing mirror descent $\rightarrow$ Computation Problem Formulation and Framework (5.1 & 5.2): Maximum A-Posteriori Estimation (MAP) as local problem and first-order methods $\rightarrow$ Implemental Principle and Computable Algorithms (5.3): Maximum entropy rule and meta step $\rightarrow$ Convergence Analysis (5.4): Bounded errors $\rightarrow$ Numerical Experiments and Analyses (6): 5 analyses for evaluation.**
- Detailed motivation logic is shown as follows:
1. **Background**: we identify the problem of missing information in the prior knowledge of *a single global model* and propose the concept of a *personalized prior* for this problem in Section 1. The formal discussion, global model $w=E_{i}w_{i}=E_{i}w_{i}|i \Rightarrow$ mutual information (MI) $I(w;i)=0$, is in Line 119-124.
[From a Bayesian and information perspective, the global knowledge transferred in a conventional method with a single global model *has no MI with the client sampling $i$*, in particular when applying the regularization $R(w^{(t)};...)$ or the local initialization $w_{i,0}^{(t)}\leftarrow w^{(t)}$, where $w_{i}$ is the local model on the $i^{th}$ client. This forces the specific model on each client to re-obtain this information from scratch, solely from the data, during training, and is especially impactful on hard-to-learn representations and datasets.]
2. **Modeling**: we turn this practical problem into a math. one through Bayesian modeling.
3. **Main Assumption**: *under the X-family prior assumption for Bayesian modeling*, Bregman divergence is introduced in order to *widen the range of the theory and simplify the computation*.
4. **Optim. Prob. and Framework**: based on EM, a bi-level optimization problem and framework, pFedBreD, is proposed.
5. **Implemental Strategies**: a class of strategies called *RMD is proposed as a class of optional implementations* of pFedBreD for personalization.
6. **Comp. Prob. and Framework**: the maximum entropy rule and first-order methods are employed to implement the framework as pFedBreD$_{ns}$.
7. **Experiment Results**: our methods with meta-step strategies reach the SOTA on 8 public benchmarks, and our hybrid method pFedBreD$_{ns, mh}$ is especially strong *in hard tasks with small aggregation ratios and non-convex local objective settings*.
8. **Further Impacts**:
- **Overall**, we introduce the problem of information overlooking during global knowledge transfer in conventional FL.
- **Theoretically**, our work provides a Bayesian explanation for regularization-based FL methods, as well as modeling and convergence analysis for most of the methods that can be generalized to Bregman regularization.
- **Practically**, our algorithm gives a new class of SOTA methods and provides validation from more than 5 different perspectives to demonstrate the effectiveness of PFL in Section 6.2.
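- For reference, the Bregman divergence mentioned in point 3 (Main Assumption) has the standard textbook form below; the specific generator used in the paper may differ, so this is only a reminder of the general definition:

```latex
% Bregman divergence with a strictly convex, differentiable generator \phi:
D_{\phi}(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle
% Choosing \phi(\cdot) = \tfrac{1}{2}\lVert \cdot \rVert_2^2 recovers the usual
% proximal / L2 regularizer D_{\phi}(x, y) = \tfrac{1}{2}\lVert x - y \rVert_2^2.
```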
- More comparisons the reviewer HD18 is interested in:
- We compare the test accuracy between our method and the baselines recommended by reviewer HD18 on the hard tasks mentioned in the paper, as shown below. The settings are the same as the ones in the paper. We provide an extra column showing the average decrease, for an overall comparison of robustness to the noise caused by lowering the aggregation ratio. (Note that *the lower the aggregation ratio*, *the larger the aggregation noise*.)
|Methods/Datasets & Models|FEMNIST & DNN|CIFAR-10 & CNN|Sent140 & LSTM|Average Decrease by Noise|
|:-|-|-|-|:-:|
|**Aggregation Ratio**|**10% $\rightarrow$ 5%**|**20% $\rightarrow$ 10%**|**40% $\rightarrow$ 20%**| - |
|FedPAC[a]|62.2 $\rightarrow$ 60.7|78.9 $\rightarrow$ 77.3|68.1 $\rightarrow$ 66.8|1.5|
|FedHN[d]|61.1 $\rightarrow$ 59.6|77.5 $\rightarrow$ 76.9|71.2 $\rightarrow$ 70.1|1.1|
|Fedfomo[c]|60.1 $\rightarrow$ 58.9|71.4 $\rightarrow$ 70.6 |70.1 $\rightarrow$ 68.9|1.1|
|Ditto[b]|52.9 $\rightarrow$ 52.2|72.4 $\rightarrow$ 72.1|71.0 $\rightarrow$ 70.3| 0.6 |
|mh (ours)|**64.9** $\rightarrow$ **64.3**|**79.4** $\rightarrow$ **79.1**|**72.0** $\rightarrow$ **71.8**| **0.37** |
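- As a sanity check on the last column, a minimal sketch of how the average decrease is computed from the before/after accuracies (our own illustration, not the authors' code; the function name is ours):

```python
def average_decrease(pairs):
    """pairs: (accuracy at the higher ratio, accuracy at the lower ratio), in %."""
    return sum(hi - lo for hi, lo in pairs) / len(pairs)

# "mh (ours)" row of the table above: 64.9->64.3, 79.4->79.1, 72.0->71.8
print(round(average_decrease([(64.9, 64.3), (79.4, 79.1), (72.0, 71.8)]), 2))  # 0.37
```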
- Modifications for further improving readability, based on the constructive suggestions. We welcome further discussion of these during the discussion period, and we consider the following:
- The pFedBreD alg. in Appendix A.5 will be moved from the appendix into the main body of the paper, between Line 169 and Line 170. [[qY3t](forum?id=kuxu4lCRr5&noteId=KNVaTFMM8K)]
- The Theorems in Appendix D.1 will be put in Section 5 before Remark 1. [[qY3t](forum?id=kuxu4lCRr5&noteId=KNVaTFMM8K)]
- All the full names of the acronyms will be placed before or around the acronyms. [[qY3t](forum?id=kuxu4lCRr5&noteId=KNVaTFMM8K)]
- Fig. 1 will be more visually linked to the various relevant parts of the math. methodology and comp. framework. [[Q28a](forum?id=kuxu4lCRr5&noteId=uuT3KKlAUX)]
- We will add a subsection to formalize the overlooked information problem. [[jGgs](forum?id=kuxu4lCRr5&noteId=XWZD1FWgxw)]
**References**:
[a] Personalized Federated Learning with Feature Alignment and Classifier Collaboration. ICLR 2023.
[b] Ditto: Fair and robust federated learning through personalization. ICML 2021.
[c] Personalized Federated Learning with First Order Model Optimization. ICLR 2021.
[d] Personalized federated learning using hypernetworks. ICML 2021. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces a novel scheme, pFedBreD, which addresses the incomplete information challenge in personalized federated learning by incorporating personalized prior knowledge into the global model of each client. This is achieved by decoupling the personalized prior from the local objective function and applying Bregman divergence regularization. The proposed approach leverages the Expectation-Maximization (EM) algorithm to approximate complete information from both global and local clients. Extensive empirical evaluations on multiple benchmarks demonstrate the effectiveness of the proposed method, and a theoretical analysis is provided to support its theoretical foundations.
Strengths: 1. The paper is well-written and easy to read.
2. This paper introduces a novel approach that incorporates Bregman divergence and leverages the Expectation-Maximization (EM) algorithm to approximate complete information from both global and local clients. It effectively enhances personalized federated learning.
3. The effectiveness of the proposed method is validated through extensive empirical evaluations and thorough theoretical analysis in the paper.
4. This work showcases superior performance across multiple benchmarks when compared to various baseline methods.
Weaknesses: 1. The authors discuss the concepts of prior knowledge injection and extraction in Figure 1, as well as analyze the process of information injection and extraction. To enhance clarity, it would be helpful if the authors explicitly indicate in the framework design section which step corresponds to these specific actions.
2. Certain aspects of the paper would benefit from additional clarifications to enhance understanding. See below.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Is the method of calculating GCE mentioned in the paper? Does it depend on model parameters or extracted feature maps?
2. Some abbreviations are not clearly explained in the paper. For example, what are MCLR, FT, AM, etc.?
3. How does the computation cost of the proposed method compare to other approaches? Is the local training phase highly time-consuming? For example, what computational resources were utilized in the experiment, and what was the duration required to complete the experiment?
4. Does the scalability of the proposed method remain unaffected? For instance, can the proposed method handle a scenario with more than 100 clients?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Comment***:
- We sincerely thank the reviewer Q28a for the appreciation and the constructive comments to further improve clarity and readability, which are detailed in [general responses](https://openreview.net/forum?id=kuxu4lCRr5&noteId=WzbbMONDyW)[W1]. The concerns raised by the reviewer, i.e., requests for further details of our experiments, are mostly addressed in the Appendix, and we answer the reviewer's questions one by one in the hope that our response can address the concerns and any questions asked.
***Responses***:
1. [W1] Modifications for improving readability.
Thanks for the suggestion; we will make the following changes based on it. Specifically, the motivation and framework illustration, Fig. 1, will be more visually linked to the relevant parts of the math. methodology and computation framework. We will move Algorithm 1 from the Appendix into the main body and highlight the relevant parts in the algorithm box with statements.
2. [Q1, W2] How do we calculate the Generalized Coherence Estimate (GCE)?
- **What is GCE?** GCE is a classic method [e, 24] in the field of signal processing used to measure the coherence among a batch of sources. This method is mentioned by citing reference [e, 24] in the paper. Formally, $GCE(X) := 1 - \text{det}(\hat{G}(X))$, where $\text{det}(\hat{G}(X))$ is the determinant of the Gram Matrix (Gramian) of $X$. The Gram Matrix consists of the cosine values of each pair of sources in the set $X$. *More details will be included* in the next version of the paper.
- **How to calculate GCE in this paper**: The GCE in Fig. 2 is $GCE(\{\nabla F_{i}(w_{i})\}_{i})$, as shown in the title of Fig. 2. The GCE analysis is on the first-order directions among each client, which are the gradients of the local objective function. The directions determine the update direction of the global model through aggregation, so our analysis depends on these directions.
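- A minimal runnable sketch of this computation (our own illustration; the function name and the exact normalization are our assumptions, and [e, 24] give the formal definition):

```python
import numpy as np

def gce(sources):
    """Generalized Coherence Estimate of a batch of sources.

    sources: array-like of shape (n_sources, dim), e.g. the per-client
    gradients {grad F_i(w_i)}_i.  Returns 1 - det(G), where G is the Gram
    matrix of pairwise cosine values: mutually orthogonal sources give
    GCE = 0, while (nearly) parallel sources give GCE close to 1.
    """
    X = np.asarray(sources, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize each source
    G = X @ X.T                                       # G[i, j] = cos(x_i, x_j)
    return 1.0 - np.linalg.det(G)
```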
3. [Q3, W2] What about computation cost?
- Neither the local training nor the entire algorithm *is highly time-consuming*; see details as follows:
- **Entire Alg. Complexity**: the complexity comparison among our baselines in Lines 572-578 in the Appendix is detailed in the Table below, where $N$,$T$,$R$,$K$ and $M$ are respectively the number of clients, global epochs, local epochs, proximal solver iterations and components on each client. Our method *shares the same complexity* with pFedMe and FedAMP.
- **Local Computation Complexity**: the local computation cost is independent of the number of clients $N$. As shown in the Table below and the last column of Table 2 in Lines 269-271 of the paper, the local computation *only adds constant time* with respect to $N$. Compared with pFedMe and FedAMP, the additional local computation is at most *one extra local gradient and addition computation* for our MAML-based personalized prior strategies. In practice, the number of local proximal solver iterations *$K$ is not too large*, e.g., $K=5$ in our settings.
- Main details about Computation Environment for our computation and test:
- GPU: TitanX with 1417 MHz frequency, 12288 MB memory, 480 GB/s of bandwidth and 11 TFLOPs;
- System: Ubuntu 20.04.1;
- Torch: 1.11.0+cu102.
[It can be seen that our method improves significantly without great cost. For ease of reading, we sort by test performance.]
|Complexity/Methods|FedEM|FedAvg|pFedMe|Per-FedAvg|FedAMP|pFedBreD (ours)|
|-|-|-|-|-|-|-|
|Entire Sys. Memory|$O(NM)$|$O(N)$|$O(N)$|$O(N)$|$O(N)$|$O(N)$|
|Entire Sys. Time|$O(NTRM)$|$O(NTR)$|$O(NTRK)$|$O(NTR)$|$O(NTRK)$|$O(NTRK)$|
|Local Computation Comparing to classic FedAvg|Each Local Loss and Weighted Sum|-|Addition|Gradient|Weighted Sum and L2-Distance|Gradient and Addition|
|Time Cost of Each Global Epoch with $R=20$ on FMNIST-DNN with TITANX (sec.)|2.7|1.9|3.1|3.3|3.1|3.6|
|Average Acc. on All Tasks|71.88|72.80|79.48|80.02|81.80|83.09|
4. [Q4, W2] Does the scalability of our method remain unaffected, e.g., the case with more than 100 clients?
- **In our Experiments**: the numbers of clients in the settings of our experiments *are above 100, as shown in Line 228*. The actual time comparison is shown in the Table above in Response 3 [Q3, W2], and the scalability in terms of *algorithm complexity* of the proposed method *is affected linearly* by the number of clients $N$ (linear complexity, an acceptable system expansion cost).
- **Theoretically**: according to Theorem 1 in Line 711 in the Appendix, we have the linear convergence rate $O(\frac{\hat{S}^{-1}-1}{NT})$; if the client sampling ratio $\hat{S}\le 1$ is fixed, the convergence rate will *still improve linearly* with $N$.
- **Conclusion**: the scalability of the proposed method, in terms of *both algorithm complexity and convergence*, is affected linearly (acceptable linear complexity) by $N$.
***Minors***:
1. [Q2] Some abbreviations are not clearly explained in the paper. For example, what are MCLR, FT, AM, etc.?
- Thanks for pointing this out. They are in Appendices C.1 and C.4, and for readability, we will put them in the main body of the revision as suggested.
***References***:
[e] An Invariance Property of the Generalized Coherence Estimate. TSP 45.4 (1997): 1065-1067.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and all my questions are clearly addressed.
Having reviewed the general responses and discussions from other reviewers, I will raise my score.
---
Reply to Comment 1.1.1:
Title: Response to the Official Comment
Comment: Thanks for the positive feedback and acknowledgment. Feel free to discuss with us and we will carefully address concerns that may be raised during the discussion phase. In the meantime, we will continue to polish our work based on the feedback from all the reviewers. Once again, thanks for the time and efforts on this work. | null | null | null | null | null | null |
Computing Optimal Nash Equilibria in Multiplayer Games | Accept (poster) | Summary: This paper addresses the problem of computing optimal Nash Equilibria in multi-player games. The authors present an optimization algorithm that uses a correlation plan-based formulation with a suitable convex relaxation in order to compute an optimal NE. At its core, the algorithm relies on a binary-tree structure of the correlation plans that enables the adoption of bilinear constraints to represent the set of NE. Finally, the authors experimentally evaluate their algorithm against different baselines for computing optimal NE.
Strengths: The main strength of this work is that the experimental evaluation shows promising results with respect to the state-of-the-art algorithms evaluated by the authors.
Moreover, as far as I can tell, the theoretical claims are sound.
Weaknesses: The aspect of this paper that concerns me the most is the contribution and significance of the presented results. I do not find it surprising that you can use correlation plans to compute a NE, and the algorithm formulation seems to me quite straightforward.
Furthermore, while the experimental evaluation shows that in the benchmarks used, the algorithm outperforms existing baselines, I find the games used as benchmarks quite small. This, in my opinion, contributes to limiting the significance of the present paper.
Overall, I don't think that, in its current state, this work is suitable for a venue like NeurIPS.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I would like the authors to address my concerns on the significance of the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your comments.
**Q**: About the contribution, significance, and small benchmarks
**Answer**: 1. To our knowledge, as we discussed in the related work section, the approach of breaking the strategy space into bilinear correlation plans based on a binary-tree structure and then heuristically reducing the feasible space, as a path to computing optimal NEs using bilinear programming, is novel. We believe the heuristics that are applied over this structure are also good original contributions: there are no existing works that compute optimal NEs in multiplayer games by exploiting correlation plans with their relations based on a binary-tree structure to strictly reduce the feasible solution space after the convex relaxation while minimizing the number of correlation plans to reduce the number of bilinear terms.\
2. Based on our general transformation framework, the straightforward approach is using the vanilla binary collection, but our CRM can generate a minimum binary collection, which dramatically reduces time complexity, i.e., replacing a term $2^n$ with a term $n \log n$. The improvement on the term $2^n$ in complexity is significant because, theoretically, it means that our proposed algorithm is significantly faster than the straightforward one. The reason is that our proposed algorithm requires significantly fewer bilinear terms than the straightforward one.\
3. Our experiment shows that our proposed algorithm is significantly faster than the baselines. For example, for solving the GAMUT game called Random graphical shown in Table 1, our algorithm (i.e., CRM) used about 0.1 seconds, but the straightforward algorithm (i.e., MIBP) used more than 800 seconds.\
4. As we mentioned in the limitation section, we cannot handle extremely large games now because we are handling a very hard problem, and then it is unrealistic to expect that our exact algorithm CRM could run very fast in large games. Our algorithm is an attempt to make this computation of optimal NEs feasible. \
5. Our algorithmic framework can be built on by further innovative heuristics to improve the computation of optimal NEs. For example, as shown in our additional experiment results (see our response to Reviewer PoVn), our algorithm CRM can solve large-scale real-world games with the aid of the PSRO framework, even when we cannot enumerate all actions due to the memory constraint.
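To give a rough feel for the gap between the $2^n$ and $n \log n$ complexity terms discussed above, here is our own back-of-envelope illustration (assuming a base-2 logarithm purely for the sake of the example):

```python
import math

# Growth of the two complexity terms as the number of players n increases.
for n in (5, 10, 20):
    print(n, 2 ** n, round(n * math.log2(n), 1))
# 5 32 11.6
# 10 1024 33.2
# 20 1048576 86.4
```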
---
Rebuttal Comment 1.1:
Comment: Thank you for the response.
After the rebuttal, I am more positive about the contribution brought by the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for reviewing our response and raising the score. We welcome new comments if you have any remaining uncertainties. | Summary: This paper studies computing a Nash equilibrium in multiplayer games that optimizes a given objective function. The designed solving framework first transforms the corresponding multilinear program into a bilinear program by introducing auxiliary variables representing probability distributions over players' joint actions. Then the feasible solution space after the convex relaxation of the bilinear terms can be reduced. The time complexity of the proposed algorithm is shown to be lower than the SOTA both theoretically and numerically.
Strengths: 1. The design of the correlation plan to reduce the feasible solution space after relaxation is interesting.
2. The proposed approach is shown to be effective by both concrete theoretical proofs and numerical experiments.
Weaknesses: 1. Although the time complexity of solving the optimal Nash is reduced, it is still exponential-time, which is not surprising.
2. The notation related to the correlation plan is complicated. Although the examples provide quite clear intuition, it would be better if there were a way to simplify the notation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: NA
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your helpful comments.
**Q**: About “Although the time complexity of solving the optimal Nash is reduced, it is still exponential-time, which is not surprising.”\
**Answer**: Based on our general transformation framework, the straightforward approach is using the vanilla binary collection, but our CRM can generate a minimum binary collection to reduce time complexity. The improvement on the term $2^n$ is crucial because:\
1. Theoretically, it means that our proposed algorithm is significantly faster than the straightforward one. The reason is that our proposed algorithm requires significantly fewer bilinear terms than the straightforward one.\
2. Experimentally, our proposed algorithm is significantly faster than the baselines including the straightforward one, which validates our theoretical results. For example, for solving the GAMUT game called Random graphical shown in Table 1, our algorithm (i.e., CRM) used about 0.1 seconds, but the straightforward algorithm (i.e., MIBP) used more than 800 seconds. \
3. As we mentioned in the limitation section, we are handling a very hard problem, and it is unrealistic to expect that we could remove all exponential terms to obtain an extremely fast algorithm. Our algorithm is an attempt to make the computation of optimal NEs feasible, and our algorithmic framework can be built on by further innovative heuristics to improve the computation of optimal NEs. For example, as shown in our additional experiment results (see our response to Reviewer PoVn), our algorithm CRM can solve large-scale real-world games with the aid of the PSRO framework, even when we cannot enumerate all actions due to the memory constraint.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response.
---
Reply to Comment 1.1.1:
Comment: Thank you once again for your positive review. | Summary: This paper tackles the challenge of computing a NE that optimizes some objective (e.g, social welfare). They present an algorithm that avoids the naive exponential blowup that occurs when trying to extend the two-player MILP for optimal NE to multiplayer optimal NE, by using relations between correlation plans to prune the feasible space that results after convex relaxation and reduce the number of bilinear terms. They present empirical evidence of the benefit of this approach.
Strengths: 1. The paper is well-written and the contributions made easy to understand.
2. The paper presents a state-of-the-art algorithm for computation of optimal NE in multiplayer games. This is an important problem to consider because it helps operationalize game theory in the real world (most multi-agent settings are not two-player zero-sum).
3. The paper contributes to a growing literature on cooperative AI (e.g., it might be useful to be able to compute an optimal equilibrium so that agents may be steered towards it).
Weaknesses: 1. It would be good to have shown a couple of experiments using CRM as the meta-solver for PSRO (as the authors themselves suggest), and maybe a comparison with other meta-solvers for PSRO, just to see how the performance compares in those settings.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: As mentioned above, it would be good to have some experiments combining this algorithm with other techniques to see how it can be used to solve larger scale games.
The use of “left” and “right” in line 126 to order the children of the binary collection is slightly confusing to me. While it is common to refer to the “left child” and “right child” of a binary tree, the ordering doesn't seem especially relevant to the subsequent discussion (could you point out where the ordering of “left” and “right” gets used?).
I have read the rebuttal and my concerns have been sufficiently addressed.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes, the limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable comments.
**Q**: About experiments combining this algorithm with other techniques to see how it can be used to solve larger scale games.\
**Answer**: We conducted experiments on real-world network security games (Jain et al., 2011). In these games, the attacker begins at a source node and traverses the network by choosing a path to one of his targets. The action space of the attacker thus consists of all possible paths. The police officers move independently, each occupying one of the edges of the network in an attempt to apprehend the attacker before he reaches his target. There are three players in these games. The edges of the network with L x W nodes are randomly generated. It is estimated that in a fully connected network with 20 nodes and 190 edges, the number of possible attacker paths is approximately $6.6^{18}$ (Jain et al., 2011).
Our experimental results show that: in games on the network with 6 x 6 nodes, CRM fails to output the result as it runs out of memory, but our CRM with the aid of the PSRO framework (CRM is used as the meta-solver) can solve these games within about 10 seconds. In significantly larger games on the network with 10 x 10 nodes, our CRM, with the aid of the PSRO framework, can solve these games within about 100 seconds. Therefore, our CRM can solve large-scale real-world games with the aid of the PSRO framework, even when we cannot enumerate all actions due to the memory constraint.
Jain, M., Korzhyk, D., Vaněk, O., Conitzer, V., Pěchouček, M. and Tambe, M., 2011, May. A double oracle algorithm for zero-sum security games on graphs. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 1 (pp. 327-334).
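For readers unfamiliar with the PSRO / double-oracle framework used above, a schematic sketch follows (purely illustrative: `psro`, `toy_meta`, and `toy_br` are our hypothetical names; `meta_solver` plays the role of CRM on the restricted game and `best_response` the role of each player's oracle):

```python
# Schematic double-oracle / PSRO loop with a pluggable meta-game solver.
def psro(meta_solver, best_response, initial_pools, rounds):
    pools = [list(p) for p in initial_pools]   # restricted per-player action sets
    sigma = meta_solver(pools)
    for _ in range(rounds):
        sigma = meta_solver(pools)             # solve the current restricted game
        grew = False
        for player in range(len(pools)):
            br = best_response(player, sigma, pools)
            if br not in pools[player]:
                pools[player].append(br)       # expand the restricted game
                grew = True
        if not grew:                           # no oracle found a new action,
            break                              # so sigma covers the full game
    return sigma, pools

# Toy instance: actions are integers, each oracle proposes the next integer up
# to a cap of 3; the "meta-solver" is a trivial placeholder.
toy_meta = lambda pools: tuple(p[0] for p in pools)
toy_br = lambda player, sigma, pools: min(max(pools[player]) + 1, 3)
sigma, pools = psro(toy_meta, toy_br, [[0], [0]], rounds=10)
# pools == [[0, 1, 2, 3], [0, 1, 2, 3]]
```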
**Q**: About the usage of the ordering of “left” and “right”\
**Answer**: In Line 126, we mentioned: “Let $N'_l$ and $N'_r$ be the left child and the right child of $N'\in \mathcal{N}$, respectively”, which means that each element $N'$ in a binary collection $\mathcal{N}$ has the binary division, i.e., it is divided into two disjoint sets $N'_l$ and $N'_r$. Based on this binary division, any joint action can be divided into two sub-joint actions, as mentioned in Line 132. Therefore, it is used when we divide any joint action into two sub-joint actions, e.g., in Example 1 and Eq.(3a).
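To make the binary division concrete, a toy sketch follows (our own illustration; the function and variable names are hypothetical, not from the paper):

```python
def split_joint_action(joint_action, left_players, right_players):
    """joint_action maps player -> action; the children partition its players."""
    assert set(joint_action) == set(left_players) | set(right_players)
    assert not set(left_players) & set(right_players)   # disjoint children
    left = {p: joint_action[p] for p in left_players}
    right = {p: joint_action[p] for p in right_players}
    return left, right

# N' = {1, 2, 3} with children N'_l = {1} and N'_r = {2, 3}:
print(split_joint_action({1: "a", 2: "b", 3: "c"}, {1}, {2, 3}))
# ({1: 'a'}, {2: 'b', 3: 'c'})
```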
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
---
Reply to Comment 1.1.1:
Comment: Thank you once again for your positive review. | Summary: The paper attempts an improvement to existing approaches to tackling the problem of optimal Nash equilibrium (NE) computation. In general, the problem is NP-hard.
Usually, a common approach in computing the optimal NE is to formulate it as the solution of a constrained mathematical program whose objective function assesses the optimality of a given NE while the feasibility set describes the set of all NE's. The solution space is nonconvex; commonly, the constraint set is relaxed to a convex one based on the set of correlation plans. The authors devise a a way to shrink the size of the underlying trees and shave-off a term that is exponential to the number of players $n$ from the running time complexity of solving a mixed integer bilinear program.
Strengths: * The analysis seems self-contained, and the previous work is well demonstrated.
* The paper is complemented with a good chunk of experiments.
Weaknesses: * The improvement in the complexity manages to get rid of a term $2^n$ for a term $n \log n$. Still, though, in both cases, these terms are multiplied by a term $m^n$ (the number of actions $m$ to the power of the number $n$ of agents). For a game to be nontrivial, $m>1$.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Is the improvement on the term $2^n$ that crucial?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are adequately discussed in my opinion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your comments.
**Q**: Is the improvement on the term $2^n$ that crucial?\
**Answer**: Yes. Based on our general transformation framework, the straightforward approach is using the vanilla binary collection, but our CRM can generate a minimum binary collection to reduce time complexity. The improvement on the term $2^n$ is crucial because:\
1. Theoretically, it means that our proposed algorithm is significantly faster than the straightforward one. The reason is that our proposed algorithm requires significantly fewer bilinear terms than the straightforward one.\
2. Experimentally, our proposed algorithm is significantly faster than the baselines including the straightforward one, which validates our theoretical results. For example, for solving the GAMUT game called Random graphical shown in Table 1, our algorithm (i.e., CRM) used about 0.1 seconds, but the straightforward algorithm (i.e., MIBP) used more than 800 seconds. \
3. As we mentioned in the limitation section, we are handling a very hard problem, and it is unrealistic to expect that we could remove all exponential terms to obtain an extremely fast algorithm. Our algorithm is an attempt to make the computation of optimal NEs feasible, and our algorithmic framework can be built on by further innovative heuristics to improve the computation of optimal NEs. For example, as shown in our additional experiment results (see our response to Reviewer PoVn), our algorithm CRM can solve large-scale real-world games with the aid of the PSRO framework, even when we cannot enumerate all actions due to the memory constraint.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I thank the authors for their response. I would like to raise my score to 5 as I am more positive after the rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for reviewing our response and raising the score. We welcome new comments if you have any remaining uncertainties. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models | Accept (poster) | Summary: This paper explores utilizing rationales to achieve multimodal reasoning based on LMs.
The authors analyze the challenges of using LMs to perform multimodal reasoning, such as the hallucination problem.
Based on the preliminary observations, the authors propose to jointly exploit the reasoning ability of LLMs and the image understanding capability of visual question-answering models for general multimodal rationale generation.
Experimental results demonstrate that the proposed method, named DDCoT, is effective.
Strengths: This paper clearly expresses its motivation and analyzes the common problems encountered when exploiting language models for multimodal reasoning.
1. The motivation is clear and the analysis of hallucination problem is comprehensive.
2. The performance of the proposed DDCoT is demonstrated to be superior to some common baselines.
Weaknesses: The method proposed in this paper seems to be a simple performance combination of large language model and existing visual question answering model, and mainly utilizes the ability of large language model to decompose questions. Such a kind of method has already been explored in the research direction of using large language models to call external tools, such as Visual Chatgpt, Augmented Language Models.
1. Although the logic of the proposed method is clear, the technical novelties seem to be insufficient. The proposed method is similar to the Self-asking COT, or ``Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP’’. For the multimodal information interaction, this paper does not present the detailed description.
2. The claims in lines 75-76 show that this paper is the first to explore the zero-shot multimodal reasoning. However, they do not give the enough analysis of this point and do not consider the performance of pretrained VLMs such as OFA or Flamingo. In addition, similar methods such as ModCR[1], Mini GPT-4, BLIP-2, and others should be compared.
3. The proposed module Rational-Compressed Visual Embedding (RCVE) aims to compress visual input embeddings according to the multimodal rationales by filtering visual features. I am confused by this calculation. For example, Formula 1, it adopts global visual features as query vectors and rational information as Key and Value. This process is the integration of different information, which could be not in line with its purpose.
4. Deep-Layer Prompting (DLP) is similar to the prompt tuning v2 method [2]. The corresponding citation is missing, and this part of the approach should be improved.
[1] Liu X, Ji K, Fu Y, et al. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks[J]. arXiv preprint arXiv:2110.07602, 2021.
[2] Guo J, Li J, Li D, et al. From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models[J]. arXiv preprint arXiv:2212.10846, 2022.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See the above strength and weakness parts
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. Please find our responses below.
> **Response to the overall weaknesses:**
>
We thank the reviewer for the rigorous consideration. However, we must clarify that our contributions and technical novelties go far beyond a “simple combination of LLMs and VQA models” (simple combination) and a “simple utilization of LLMs’ ability to decompose questions” (simple decomposition):
1. **We are the first to (1)** study and achieve zero-shot multimodal CoT rationale generation, **(2)** discover that LLMs’ hallucinations are intensified with interleaved multimodal inputs and thus deeply analyze how to put visual information in the text to generate helpful rationales, and **(3)** propose a novel prompting method (DDCoT) to generate rationales that works for both zero-shot and fine-tuning learning. Previous works, including the mentioned Visual ChatGPT and Augmented Language Models, do not explore these aspects.
2. **The above simple combination and decomposition cannot achieve multimodal rationale generation**.
1. Such a simple combination predisposes LLMs towards hallucination, generating fabricated rationales (see Lines 172-183). As a result, rationales simply generated from LLMs with captions offer significantly constrained assistance for multimodal reasoning (73.33 on IMG and 82.15 on average) in comparison to our approach (83.34 on IMG and 87.34 on average).
2. Also, the simple decomposition without considering “negative-space prompting” (Lines 194-205) or “integrating to joint reasoning” (Lines 214-223) cannot work well, i.e., -7.73% (Lines 15-19 in the Appendix) for the former and -5.85% (refer to the response table for Reviewer ka68) for the latter on the IMG split, in comparison to our DDCoT.
3. Our DDCoT not only **significantly outperforms** previous methods but also exhibits impressive generalization ability and explainability.
> **Technical novelties** seem to be insufficient. The proposed method is similar to the Self-asking COT, or Demonstrate-Search-Predict.
>
Regarding the concern about our novelties, please refer to our “response to the overall weaknesses”.
The significant differences compared to Self-asking COT and Demonstrate-Search-Predict are:
1. **Focuses and motivations:** The Self-asking COT and Demonstrate-Search-Predict work solely on the language modality, and they have different focuses compared to our multimodal CoT rationale generation. In contrast, the new challenges and corresponding solutions brought by multimodality are what we focus on.
2. **Techniques**: Self-asking CoT and Demonstrate-Search-Predict have not explored at least two of our core ideas: (1) distinct duties by considering the uncertainty, and (2) integrating sub-questions and answers to generate one coherent rationale instead of a final prediction.
> **Multimodal reasoning:** The claims in lines 75-76 show that this paper is the first to explore the zero-shot multimodal reasoning. However, they do not give the enough analysis of this point and do not consider the performance of pretrained VLMs such as OFA or Flamingo. In addition, similar methods such as ModCR[1], Mini GPT-4, BLIP-2, and others should be compared.
>
1. This work is the first to study the “zero-shot **multimodal rationale generation”** (Lines 75-76) rather than “multimodal reasoning”.
2. In fact, our proposed rationale generation is compatible with such pretrained VLMs and multimodal reasoning methods. Without correct rationales, existing pretrained VLMs and multimodal reasoning models have difficulty with complex reasoning tasks. Fortunately, **the generalizable rationales generated by our DDCoT prompting can help existing VLMs** comprehend visual information and reason with rich knowledge, achieving significant improvements of **11.14**% and **10.96**% based on Flamingo and Mini GPT-4, as the table below shows. Thanks for your feedback; we will include the analysis and experiments.
| | NAT | SOC | LAN | TXT | IMG | NO | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OFA | 5.91 | 0.11 | 13.18 | 13.05 | 0.30 | 11.85 | 6.58 |
| Flamingo | 21.89 | 52.41 | 20.27 | 23.50 | 39.11 | 19.02 | 27.87 |
| Flamingo with our R | 39.20 | 48.93 | 30.90 | 39.68 | 45.81 | 32.40 | 39.01 |
| Mini GPT-4 | 43.83 | 48.59 | 43.36 | 55.01 | 42.84 | 41.67 | 44.71 |
| Mini GPT-4 with our R | 57.37 | 62.32 | 46.82 | 65.91 | 56.72 | 48.57 | 55.67 |
| BLIP-2 | 67.40 | 56.36 | 52.45 | 67.01 | 62.77 | 53.58 | 61.21 |
| Ours | 88.72 | 86.84 | 84.91 | 87.59 | 83.34 | 88.08 | 87.34 |
> **Calculation in RCVE:** The proposed module Rational-Compressed Visual Embedding (RCVE) aims to compress visual input embeddings according to multimodal rationales by filtering visual features. I am confused by this calculation.
>
Formula 1 and Formula 2 together form the RCVE module. In Formula 1, global visual features are updated based on their similarity to the input rationales. In Formula 2, the updated visual features are first reshaped into $N_r$ low-rank intermediate vectors, which then capture local visual information through the attention mechanism. These processes culminate in the compression of local visual information into $N_r$ visual embeddings.
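To make the two-step computation concrete, a simplified numpy sketch is given below. All shapes, the placeholder projection `W`, and the variable names are illustrative assumptions only; the actual Formulas 1 and 2 use learned parameters and differ in detail:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # Scaled dot-product attention: rows of q attend over k/v pairs.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

d, n_local, n_rat, n_r = 64, 49, 20, 4        # illustrative sizes
rng = np.random.default_rng(0)
vis_global = rng.normal(size=(1, d))          # global visual feature
rationale = rng.normal(size=(n_rat, d))       # rationale token embeddings
vis_local = rng.normal(size=(n_local, d))     # local visual features (e.g. a 7x7 grid)

# Step 1 (cf. Formula 1): update the global visual feature by attending
# over the rationale tokens (visual feature as query, rationale as key/value).
updated = cross_attention(vis_global, rationale, rationale)           # (1, d)

# Step 2 (cf. Formula 2): reshape into n_r low-rank intermediate vectors,
# project back to dimension d, then attend over the local visual features.
intermediate = updated.reshape(n_r, d // n_r)                         # (n_r, d/n_r)
W = rng.normal(size=(d // n_r, d)) / np.sqrt(d // n_r)                # placeholder projection
compressed = cross_attention(intermediate @ W, vis_local, vis_local)  # (n_r, d)
print(compressed.shape)  # local information compressed into n_r embeddings
```

The net effect is that local visual information is distilled into $N_r$ rationale-conditioned visual embeddings, matching the description above.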
> **Citation:** Deep-Layer Prompting (DLP) is similar to the Prompt tuning v2 method [2]. The whole approach should be improved. Missing the corresponding citation [1][2].
>
Thanks for your valuable feedback. We will cite and discuss the shallow prompting [1,2,3,4] and deep prompting [5,6,7,8] methods, which are common in prompt learning. Our DLP is used to facilitate the alignment of visual and linguistic semantics at a shallow level and, combined with RCVE, to utilize the rationale to jointly encode multimodality at each layer.
See our response to Reviewer aDZq for the citations [1-8].
---
Rebuttal Comment 1.1:
Title: Official Comments from ndag
Comment: The rebuttal addresses my comments. The paper does have merits. Therefore, I am now inclined to accept the paper. | Summary: This paper proposes a novel DDCoT prompting method that maintains a critical attitude through negative-space prompting and incorporates multimodality into reasoning: it first divides the reasoning responsibility of LLMs into reasoning and recognition, and then integrates the visual recognition capability of visual models into the joint reasoning process.
Strengths: - The authors deeply analyze the challenges and insights in multimodal CoT for the rationale generation.
- The authors propose a novel DDCoT prompting to maintain a critical attitude and identify reasoning and recognition responsibilities through the combined effect of negative-space design and deconstruction.
- The experiments show the superiority of the proposed method.
Weaknesses: - Are the analysis results of the single cases in Sections 3.1.1 and 3.1.2 applicable to most samples in the dataset? In addition to ChatGPT, can GPT-3 reach similar conclusions?
- On what basis are the three principles in 3.2 considered?
- Why is the “uncertainty” called a negative space?
- The statement “learnable prompts to facilitate the alignment of visual and linguistic semantics at a shallow level but also utilizes explicit rationale to jointly encode multimodality by learning different prompts for each encoder layer” should cite related papers.
- There are some minor errors in this paper. For example, “exploring to introduce programming approach” should be “exploring to introduce a programming approach”.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. Please find our responses below.
> **General insights:** Are the analysis results of the single cases in Sections 3.1.1 and 3.1.2 applicable to most samples in the dataset?
>
Our findings in 3.1.1 and 3.1.2 are applicable to most samples. The applicability of “the different roles of rationales in zero-shot and fine-tuning learning” (3.1.1) is intuitive in nature and extends to generic scenarios. Similarly, the discovery that “interleaved information without careful design exacerbates hallucinations” in Section 3.1.2 remains pertinent for a substantial portion of the dataset, making it a noteworthy phenomenon. We present a quantitative analysis of hallucinations (see Appendix A.2 for details) and a new experiment for the discovery in 3.1.2 below:
| Method | Authenticity |
| --- | --- |
| w/o visual information | 0.883 |
| w/ visual information | 0.602 |
| w/ visual information & duty distinct w/o uncertainty | 0.783 |
| w/ visual information & duty distinct w/ uncertainty | 0.855 |
As shown in the table, introducing visual information exacerbates the hallucinations and diminishes the authenticity of generated rationales by 28.1% compared with naive prompting devoid of any visual information input. Our "duty distinct" and "uncertainty" designs effectively alleviate the hallucinations, elevating authenticity to a level comparable to scenarios without visual information input.
Thanks for the valuable comments, and we will add the user study and analysis in the revised paper.
> **Can gpt-3 reach similar conclusions as chatgpt?**
>
GPT-3 can reach similar conclusions as ChatGPT, in terms of “difficulty in understanding dense image information” (3.1), “rationale-sensitive reasoning” (3.1.1), and “intensified hallucinations with visual information” (3.1.2). We provide additional illustrative instances in Appendix A.3, and the similar performance of our method when using GPT-3 and ChatGPT also demonstrates this.
> **On what basis are the three principles in 3.2 considered?**
>
We consider the three principles from the following insights:
- For the principle “utilize LLMs’ intrinsic knowledge to generate multimodal rationales”: Rationales are necessary for multimodal comprehension with LLM because LLM has difficulty jointly reasoning multimodal information (3.1). Besides, flexible rationales are required to be knowledge-enriched (3.1.1), so LLMs become a natural choice for rationale generation.
- For the principle “explicitly cue the LLMs to differentiate the responsibilities of reasoning and recognition step by step”: The challenge of hallucination is exacerbated when introducing interleaved multimodal information, as LLMs would fabricate visual information (3.1.2). Explicit duty-distinct prompting is one solution to fabricated visual information, compelling the LLMs to delegate the recognition responsibility to off-the-shelf visual models, which obtain the visual facts. Similarly, the responsibility for reasoning needs to be assigned to LLMs with reasoning capabilities.
- For principle “explicitly mark the negative space for uncertain parts, emphasizing critical thinking in rationale generation”: Considering the rationale-sensitive reasoning for zero-shot prompting (3.1.1), the authenticity of rationales becomes significant. Uncertainty facilitates the LLMs to maintain a critical attitude towards questions and supplementary visual information, thereby enhancing the authenticity of the generated rationales.
Thanks for the valuable feedback, and we will revise the overview of 3.2 to add the above clarification.
> **Why is the “uncertainty” called a negative space?**
>
The "negative space prompting" refers to our prompting method, including decomposition and uncertainty. The multi-modal CoT in our approach is decomposed into multiple sub-questions with "spaces", where the "space" that can be answered by the LLM is "positive", and otherwise the "space" is "negative" to be filled, i.e. the uncertainty. We intend to incorporate both decomposition and uncertainty into the name. If the clarification remains unclear, we will replace "negative space prompting" with "uncertainty prompting". Thank you for your feedback. We will carefully polish the paper and modify the confusing terms to facilitate understanding.
> **Citation:** The statement ‘’learnable prompts to facilitate the alignment of visual and linguistic semantics at a shallow level but also utilizes explicit rationale to jointly encode multimodality by learning different prompts for each encoder layer.’’ should cite related papers.
>
Thanks for the valuable feedback, and we will cite and discuss the shallow prompting [1,2,3,4] and deep prompting [5,6,7,8] methods.
> **Minor errors:** There are some minor errors in this paper. For example, “exploring to introduce programming approach” should be “exploring to introduce a programming approach”.
>
Thanks for pointing out this, and we will polish the paper in the revision.
[1] Learning to prompt for vision-language models. Zhou, Kaiyang, et al. IJCV2022.
[2] Conditional prompt learning for vision-language models. Zhou, Kaiyang, et al. CVPR2022.
[3] Dualcoop: Fast adaptation to multi-label recognition with limited annotations. Sun, Ximeng, Ping Hu, and Kate Saenko. NeurIPS2022
[4] Prompt distribution learning. Lu, Yuning, et al. CVPR2022.
[5] Visual prompt tuning. Jia, Menglin, et al. ECCV2022.
[6] Maple: Multi-modal prompt learning. Khattak, Muhammad Uzair, et al. CVPR2023.
[7] P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks. Liu, Xiao, et al. ACL2022.
[8] P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. Liu X, Ji K, Fu Y, et al. arXiv:2110.07602, 2021. | Summary: The work focuses on solving multimodal reasoning task. In the zero-shot setting, the work prompts LLM to conduct step-by-step reasoning. To avoid hallucination due to the lack of image features, LLM is asked to leave the answer to sub-questions as “uncertain” if they involve images. Then the corresponding sub-questions are answered by the visual components. In the fine-tuning setting, the generated rationales are used to train an LM augmented with a visual encoder. Experiments show that the elicited multimodal rationales can improve performance in both settings.
Strengths: - Originality: The work adopts careful prompt engineering that encourages LLM to (1) offload the step of visual recognition to the visual component and (2) discard errors in the intermediate results. Both are shown to be effective in mitigating hallucination and robust respectively.
- Quality: The work investigates both zero-shot evaluation and fine-tuning to validate the effectiveness of the generated multimodal rationales. Comprehensive experiments and ablation studies are conducted to justify the design choices.
- Clarity: The paper is well-written with illustrative running examples and figures. The intuitions behind each design choice are clearly stated, which helps readers to understand both the technical problems and motivations.
- Significance: The proposed method brings significant improvement to a particular dataset, and also shows better generalization.
Weaknesses: 1. The key ideas the paper tries to convey are somewhat scattered. The prompt engineering part and the visual component can be individually stand-alone as independent work. Right now, it is also hard to tell which part matters more. Perhaps the authors can add experiments where (1) a baseline model is trained on the rationales generated by the proposed method and (2) the proposed augmented model is trained on human-annotated rationales.
2. Only one dataset is chosen. And many questions in ScienceQA do not require images to be answered indeed. I would encourage the authors to explore more suitable datasets.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How do you sample the rationales for fine-tuning? Do you only sample those leading to the correct predictions?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have clearly stated the risks/biases brought by LLMs which are the common problems for LLMs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful and helpful comments. Our detailed responses are available below for your consideration.
> **Somewhat scattered key ideas (part 1).** The prompt engineering part and the visual component can be individually stand-alone as independent work.
We appreciate the reviewer’s acknowledgment of our DDCoT prompting and visual components. Both facilitate inducing visual information into language models for multimodal reasoning: the former focuses on generating high-quality multimodal rationales, while the latter aims to optimize the utilization of those generated rationales and visual features, since the visual information in images cannot be fully converted into the text inputs of LLMs. Thanks for your valuable feedback; we will include a comprehensive perspective on the two techniques at the outset of Section 3.2.
> **Somewhat scattered key ideas (part 2).** Which part of this paper matters more? Perhaps add experiments where (1) a baseline model is trained on the rationales generated by the proposed method and (2) the proposed augmented model is trained on human-annotated rationales.
Thanks for the valuable suggestion. We also agree with the necessity to demonstrate the extent of impact exerted by the two components, and we present the results in the following table:
| | IMG | TXT | Avg |
| ----------------- | ----- | ----- | ----- |
| baseline | 72.93 | 85.84 | 79.7 |
| baseline + our R | 75.81 | 82.40 | 82.83 |
| baseline + gt R | 81.07 | 84.07 | 85.97 |
| our model | 75.16 | 85.25 | 80.45 |
| our model + our R | 83.34 | 91.2 | 87.34 |
| our model + gt R | 84.43 | 92.09 | 88.00 |
As shown in the table, we observe that our R and visual components individually exhibit certain gains in terms of IMG improvement. However, **when combined, they yield substantial gains.** This observation aligns with our response above.
Besides, please note that the annotated ground truth rationales within the ScienceQA dataset inherently encompass the final prediction, i.e., correct answers. To ensure a fair comparison, we manually exclude the answers from these annotations, using the remaining text as input rationales for fine-tuning. Under this configuration, our proposed rationale achieves a fine-tuning performance comparable to the annotated rationales.
Thanks for pointing out this, and we will add the experiments and analysis in the revised paper.
> **Only one dataset is chosen.** And many questions in ScienceQA do not require images to be answered indeed. I would encourage the authors to explore more suitable datasets.
Thanks for your valuable feedback. ScienceQA indeed includes some questions that do not require images to answer, but we achieve significant improvement on its IMG split (questions that do require images). Currently, ScienceQA is a suitable dataset as it incorporates multimodality and requires complex reasoning.
We agree that extending our approach to more appropriate datasets would be beneficial. While other existing datasets may not fully exploit the benefits of our approach, we venture into exploring the captioning task on NoCaps and the video question-answering task on MSVD-QA:
| Method | NoCaps: CIDEr | NoCaps: SkipThoughtCS | NoCaps: EmbeddingAverageCS | NoCaps: GreedyMatchingScore | MSVD-QA: Acc |
| --- | --- | --- | --- | --- | --- |
| BLIP-2 | 76.15 | 49.84 | 89.20 | 77.94 | 34.4 |
| Ours | 46.26 | 84.78 | 92.35 | 79.12 | 39.3 |
**For captioning,** we prompt the LLM to solve sub-problems derived from a simple caption, aiming to optimize and enrich it using the corresponding sub-answers. The substantial knowledge within the LLM enables the generation of semantically enriched captions, leading to improvements in metrics evaluating sentence semantics, i.e., 34.94%, 3.15%, and 1.18% in terms of SkipThoughtCS, EmbeddingAverageCS, and GreedyMatchingScore. Note that the CIDEr metric is limited for evaluating our method: it is designed to measure the similarity between the tested caption and reference captions without considering diversity and high-level semantics.
**For video question answering,** LLM deconstructs problems like the decomposition step on ScienceQA. We sample video frames for VQA recognition and integrate frame information for multimodal rationale and answers. Leveraging the sequence understanding in LLM and visual information returned by the VQA model, we achieve a 4.9% improvement over BLIP-2.
Due to time limitations, we randomly evaluated only 1000 images from NoCaps and 1000 videos from the MSVD test dataset in a zero-shot setting. In the revised paper, we intend to extend our experiments to encompass additional tasks in both zero-shot and fine-tuning settings.
> **How do you sample the rationales for fine-tuning?** Do you only sample those leading to the correct predictions?
For fine-tuning, we employ all the available rationales without resorting to sampling. | Summary: The paper proposes Duty-Distinct Chain-of-Thought (DDCoT) Prompting for multimodal reasoning problems (e.g. VQA). Despite CoT's success in language-only reasoning problems, authors argue that multimodal reasoning challenges CoT as the rationale part is sensitive to the input information. Since image caption is the only source of image information for LLM, once the caption is generated poorly, rationale will intensity hallucinations.
To this end, DDCoT proposes two insights: "keep critical thinking" and "disentangle reasoning and recognition". Specifically, the pipeline is like:
1. Given input question, zero-shot prompt LLM: “please think step-by-step and deconstruct the question down to necessary sub-questions"
2. Determine if each subquestion can be answered without visual information with zero-shot prompt: “Assume that you do not have any information about the picture, try to answer the sub-question and formulate the corresponding sub-answer as ‘Uncertain’ if the sub-question cannot be determined”. This helps deal with hallucination (i.e. LLM makes up facts about images).
3. Use an off-the-shelf VQA model to answer each subquestion "with negative space" (my guess is, each subquestion with 'Uncertain' as answer?)
4. Given subquestions and answers, LLM is prompted to perform CoT reasoning, generating a rationale and a final answer.
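In pseudocode form, my understanding of the pipeline is roughly as follows (the function names, prompts, and interfaces here are my own paraphrase, not the paper's exact implementation):

```python
# Paraphrased sketch of the described pipeline; `call_llm` and `vqa_model`
# are hypothetical stand-ins for the LLM/VQA interfaces, and the prompts
# are abbreviated versions of those quoted above.
def ddcot_zero_shot(question, image, call_llm, vqa_model):
    # 1. Deconstruct the question into necessary sub-questions.
    subs = call_llm(f"Think step-by-step and deconstruct into sub-questions: {question}")
    # 2. Answer each sub-question without the image; answers requiring
    #    visual information are marked 'Uncertain' (the negative space).
    answers = [call_llm(f"Without any picture information, answer or say 'Uncertain': {s}")
               for s in subs]
    # 3. Route the 'Uncertain' sub-questions to an off-the-shelf VQA model.
    answers = [vqa_model(image, s) if a.strip().lower() == "uncertain" else a
               for s, a in zip(subs, answers)]
    # 4. Joint CoT reasoning over sub-questions/answers to produce the
    #    rationale and final answer.
    context = "\n".join(f"Q: {s}\nA: {a}" for s, a in zip(subs, answers))
    return call_llm(f"{context}\nReason step-by-step and answer: {question}")
```

Under this reading, step 3 resolves my question about "negative space": only the sub-questions whose LLM answer is 'Uncertain' are sent to the VQA model.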
The paper also considers a finetuning setup with some architecture inventions.
Results on ScienceQA show
- zero-shot prompting: 1-3% improvement from previous few-shot CoT baselines
- finetuning: more impressive improvements and generalization from baselines.
Analysis on visual information source, rationale generation process, fine-tuning components, and explainability are also conducted to further justify DDCoT.
Strengths: - Multimodal reasoning (e.g. VQA) is an important research direction, and LLM prompting is an emerging and promising approach to it. The paper highlights limitations of existing CoT-based methods for VQA (sensitive to caption; rationale does not improve performance), and propose reasonable solutions to these problems.
- DDCoT proposes an interesting heuristic way to combine LLM and off-the-shelf VQA models as a tool: LLM decomposes questions, decide which parts to call VQA model, and aggregates information back for answer. The approach could be potentially valuable for other tasks (involving video, code, webpages...)
- The consideration of both prompting and finetuning setups add to technical depth.
- Experiments show improvements, and their design looks solid. I appreciate comprehensive and diverse ablations and analysis.
Weaknesses: - The presentation is a bit poor. The concrete problem is just "how to best put visual information in text, so that the rationale can help", and the main insight is just "use VQA models across subquestions to best induce visual information, instead of off-the-shelf captioning". But it took me some time to get this. The "flexibility/generalizability/explainability" framing doesn't make much sense to me --- I don't think few-shot CoT is really labor-intensive (just 3 examples??), inflexible, or hard to explain. The main motivation should be performance and generalization. 3.1.1 seems commonly known and a bit redundant. So the intro, 3.1, 3.1.1, and 3.1.2 read like four different stories of motivation, which is confusing. Why not stick to one single story, instead of writing it in four different ways/views/places?
- Continuing on the presentation issue, I don't get Figure 1,2,3,4,5 --- all are examples, and in fact, Figure 5 should better be the teaser given it actually tells a bit about how the method works. Fig 1's example doesn't really tell much, as MM-CoT and UnifiedQA's rationales are omitted. etc.
- Continuing on the presentation issue, some terms are confusing (or at least lack explaining), e.g. "negative space prompting".
- While finetuning performances seem stronger, prompting performance is only slightly better than few-shot CoT baselines, and there are not a lot of prompting baselines to begin with. In the long run, seems prompting performance will be more important than finetuning performance (one can also imagine multi-modal GPT-4 just solves ScienceQA and beats every model, making it less worthwhile to study...)
- The results are only on ScienceQA, so it is unclear how general the findings are, given that the original CoT was evaluated on various problems; multimodal CoT should be able to solve a lot of different tasks as well.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: I think the paper can benefit from better writing, and experiments from different datasets.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: Appendix E talks about hallucination and social bias.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and the concerns about the presentation. We carefully address your questions and comments below and will improve the writing accordingly.
> **The presentation is a bit poor (part 1)**: The concrete problem and main insight. The motivation and organization structure of Sec 3.
Thanks for your detailed review. As the reviewer points out, our core problem is **how to best put visual information in the text to generate helpful rationales.** In order to address this problem, the below two explorations on helpful rationales and visual information are needed, which we present in Sec 3.1:
- What kind of **rationales is helpful** and generalizable? (see Sec 3.1.1 for details)
- What challenges and issues will there be in **putting visual information** for rationale generation? (see Sec 3.1.2 for details)
Based on the explorations, we summarize key principles to generate helpful rationales and propose the concrete DDCoT prompting method in Sec 3.2:
- As the reviewer points out, using VQA models across subquestions to best induce visual information is one of the main insights in DDCoT prompting.
- In addition, the “sub-questions with negative-space prompting” (Line 191) and “integrate to joint reasoning” (Line 214) are also main and crucial parts. The former mitigates hallucination, and the latter helps generate a coherent and logical rationale rather than scattered factual sub-questions and answers.
The writing structure in “Section 3 Method” is presented from concept exploration (Sec 3.1) to concrete method design (Sec 3.2). We appreciate your valuable feedback, and we will provide an overview at the beginning of Section 3 to introduce our core problem, organizational structure, and main content of each subsection. Also, we will carefully revise the writing in each subsection accordingly.
> **The presentation is a bit poor (part 2)**: I don't think few-shot CoT is really labor-intensive, inflexible, or hard to explain.
It is labor-intensive for fully supervised methods that require ground-truth rationale annotations. For few-shot CoT methods, we agree that providing a few example rationales for one case is not expensive. However, automatically searching for appropriate examples for a large set of diverse questions takes effort (another promising direction, but outside the scope of this paper), as CoT methods are highly sensitive to reasoning complexity and task-specific demonstrations. That is also why we believe generalizability is crucial and currently focus on the zero-shot setting, which can generalize to different questions.
> **Continuing on the presentation issue:** The information in Figures 1,2,3,4,5; Fig 5 should better be the teaser. Fig 1's example doesn't really tell much.
As the reviewer notes in the previous weakness, “the main motivation should be performance and generalization”; we also agree that generalization and performance are crucial aspects, which is why they are presented and emphasized in Fig 1. Fig 1(a) shows a simple out-of-distribution example illustrating the poor generalization ability of previous methods compared to ours, and Fig 1(b) shows the performance comparison in different settings. Figures 2, 3, and 4 are examples that help demonstrate and clarify our insights in Sections 3.1, 3.1.1, and 3.1.2, respectively.
Thanks for the feedback and suggestion. We will combine our most crucial insights in Fig 4 and the method illustration in Fig 5 to redesign and update a more appropriate teaser.
> **Continuing on the presentation issue:** Some terms are confusing (or at least lack explaining), e.g. "negative space prompting".
The "negative space prompting" refers to our prompting method, including decomposition and uncertainty. We decompose multi-modal CoT into multiple sub-questions with "spaces", where the "space" are "positive" if LLMs can answer the sub-question, and otherwise, the "space" is "negative" to be filled, i.e. the uncertainty. We intend to include both decomposition and uncertainty in the name. If the explanation is still confusing, we will replace "negative space prompting" with "uncertainty prompting". Thank you for your feedback. We will carefully polish the paper and modify the confusing terms to facilitate understanding.
> **Finetuning performances seem stronger.** Prompting performance will be more important.
The performance improvement from finetuning is more significant than in the zero-shot setting. This is understandable, as the complete information of an image cannot be entirely translated into text input for LLMs. Note that our prompting method with ChatGPT surpasses the few-shot CoT baseline by 4.61% on the IMG split (Line 285), which is significant and a larger improvement than on the Avg split. Besides, we observe that our technique benefits more as the underlying LLM becomes stronger (2.53% for GPT-3 vs. 4.61% for ChatGPT) (Line 284).
> **Multimodal CoT should be able to solve a lot of different tasks as well.**
We appreciate the reviewer’s recognition of the potential value of our approach, and we agree that multimodal CoT could be applied to various tasks as well. Per your advice, we further conducted experiments on captioning and video question answering tasks. The table below demonstrates the general effectiveness of our approach across various tasks.
| Method | CIDEr (NoCaps) | SkipThoughtCS (NoCaps) | EmbeddingAverageCS (NoCaps) | GreedyMatchingScore (NoCaps) | Acc (MSVD-QA) |
| --- | --- | --- | --- | --- | --- |
| BLIP-2 | 76.15 | 49.84 | 89.20 | 77.94 | 34.4 |
| Ours | 46.26 | 84.78 | 92.35 | 79.12 | 39.3 |
Please refer to our third response to Reviewer kQGJ for the analysis. Thanks for the valuable suggestion; we will explore more tasks in both zero-shot and finetuning settings in the revised paper.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: increased my score from 5 to 6 in light of the rebuttal
---
Reply to Comment 1.1.1:
Title: Thank you for your encouraging response!
Comment: Thank you for your encouraging response! We deeply appreciate your thorough and invaluable feedback, and we will refine our paper accordingly. | Rebuttal 1:
Rebuttal: We really appreciate all reviewers for their valuable feedback. Our code will be made public upon acceptance.
We are encouraged by the reviewers’ recognition of our novel/interesting contribution (ka68, oVaJ, aDZq), solid and robust technical design (oVaJ, kQGJ, aDZq), compelling performance improvement (ka68, oVaJ, kQGJ, aDZq, ndag) and diverse ablation studies (ka68, oVaJ, kQGJ).
------
In our individual replies, we address specific questions and comments as clearly and thoroughly as possible. Here, we briefly summarize the additional experiments and evaluations:
- Ablations on the 'Integrate to Joint Reasoning' part.
- Additional experiments on Captioning and Video Question Answering tasks.
- Quantitative ablations on our DDCoT and visual components.
- Additional user study for validating the generalization of the discovery in 3.1.2.
- Quantitative comparison with existing pre-trained VLMs and multimodal reasoning models: Our proposed rationale generation is compatible with such pre-trained VLMs.
------
We hope that these additional results further strengthen DDCoT’s position as the state-of-the-art multimodal Chain-of-Thought (CoT) approach:
- We are the **FIRST** to study and achieve **zero-shot multimodal rationale generation**, considering flexibility, generalizability, and explainability.
- We deeply analyze the challenges of **putting visual information into text to generate rationales** (Sec 3.1) and **derive the critical principles** (Lines 187-190) for generating flexible and generalizable multimodal rationales using LLMs: (1) drawing on the LLMs' intrinsic knowledge, (2) differentiating the responsibilities of reasoning and recognition, and (3) emphasizing critical thinking in the face of uncertainty.
- We propose zero-shot DDCoT prompting to generate multimodal rationales that significantly improve the multimodal reasoning abilities of LMs in both zero-shot prompting and fine-tuning learning while exhibiting impressive generalization ability. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: 1. The paper studied the challenges and limitations in rationale generation for multimodal problems. Then the authors propose a Duty-Distinct Chain-of-Thought Prompting (DDCoT) to collect language-related or visual-related information and select valid information to generate the rationale.
2. The rationale can be used in zero-shot and fine-tuning settings for question answering. The authors designed a fine-tuning framework with deep-layer prompting and rationale-compressed visual embedding.
3. The experiment results demonstrate the effectiveness of the rationale generated by the proposed method with SOTA results on the ScienceQA benchmark.
Strengths: 1. The proposed prompting design to separate the text reasoning and visual information for multi-modal QA problems is novel.
2. The result on the ScienceQA dataset is significant in zero-shot and fine-tuning settings, the ablation shows the effectiveness of different parts of the model.
Weaknesses: 1. Some experiment settings are not clearly explained or confused. See questions.
2. It's better to validate the effectiveness of 'Integrate to Joint Reasoning' part of prompting in the ablation if it's an important part of the method.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The baseline(B) model in Table 2 is not explained.
2. Does 'B+our img' in Table 2 use rationale-compressed visual embedding or not? What is the difference between 'B+our img' and 'w/ our R'? In line 308, it seems that 'B+our img' includes rationale-compressed visual embedding.
3. Does 'w/o DLP' in Table 3 include RCVE?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitation is not discussed in the paper. Societal impact is discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful review. We carefully address your questions and comments below.
> **Some experiment settings** are not clearly explained or confused. See questions.
Thank you for the valuable feedback; we will clearly explain the experimental settings in the revised paper and resolve the confusion. Below, we address the questions one by one.
> **The baseline(B) model in Table 2** is not explained.
Following [1], we adopt T5-Base with only text input (i.e. questions, contexts, and options) to predict answers as our baseline (B). All image-related information is omitted, given that T5 operates solely as a language model. Thanks for pointing this out; we will include the explanation in the revision.
[1] “Multimodal Chain-of-Thought Reasoning in Language Models” Zhang, Zhuosheng, et al. *arXiv preprint arXiv:2302.00923* (2023).
> Does **'B+our img' in Table 2** use rational-compressed visual embedding or not? What is the difference between 'B+our img' and 'w/ our R'?
- 'B+our img' in Table 2(b) uses both rationale-compressed visual embedding and deep-layer prompting. The first four experiments in Table 2(b) focus on how to utilize visual information in different modalities. Compared with captions, the image modality presents a more challenging scenario for the model to comprehend visual information. Our proposed deep-layer prompting and rationale-compressed visual embedding facilitate the model's understanding and learning of the alignment between modalities, ultimately enabling the extraction and utilization of image features.
- 'B+our img' in the first four lines is identical to 'no R' in the three lines below, which are intended to elucidate the effects of rationales generated by different methods. Using 'B+our img' as a foundation, the rationale produced by our DDCoT helps the model better comprehend and reason over the multimodal context, resulting in a notable performance improvement on the IMG split ('w/ our R').
> Does **'w/o DLP' in Table 3** include RCVE?
The 'w/o DLP' in Table 3 does not include RCVE. The notation 'w/o DLP' refers to the condition 'w/o DLP & w/o RCVE'. We will replace the notation 'w/o DLP' with 'w/o DLP & w/o RCVE' in our revised paper.
> **Ablations on 'Integrate to Joint Reasoning':** It's better to validate the effectiveness of 'Integrate to Joint Reasoning' part of prompting in the ablation if it's an important part of the method.
Thanks for the valuable feedback. We agree that validating the effectiveness of 'Integrate to Joint Reasoning' is essential, given its integral role within our approach. We present the ablation studies for this component in the following table:
| | IMG | TXT | Avg |
| ---------------- | ----- | ----- | ----- |
| No R | 75.16 | 85.25 | 80.45 |
| naive R | 75.06 | 89.61 | 82.96 |
| sub-qa as R | 77.49 | 82.36 | 83.75 |
| our R | 83.34 | 91.20 | 87.34 |
As shown in the table, the absence of the 'Integrate to Joint Reasoning' part also leads to a decline in performance, specifically -5.85% for the IMG split and -3.59% for overall performance. The decline is attributed to the model’s struggle to comprehend scattered facts instead of coherent reasoning chains, coupled with its limited reasoning capabilities.
Thanks for the suggestion, and we will include the ablation and analysis of 'Integrate to Joint Reasoning' part of prompting in the revised paper.
> **The limitation** is not discussed in the paper. Societal impact is discussed.
We discuss the limitations across the following aspects (see Appendix E for details): (1) the challenge of hallucinations in multimodal reasoning remains partially unresolved; (2) cross-modality pretraining is anticipated to further enhance the efficacy of our methods; (3) the social biases introduced by the LLMs remain a concern.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the clarifications! However, the reviewer still has the following concerns:
1. Why only evaluate on the ScienceQA benchmark? There are several visual reasoning tasks that rationale generation may help, such as A-OKVQA [1]. The authors propose a rationale-focused method; I believe it is better to discuss what kinds of problems and datasets it suits.
2. The contribution is somewhat limited, as the technical difference between the RCVE module and former visual-text attention methods, such as the Q-former in BLIP-2 and the perceiver in Flamingo, seems limited [2,3,4]. The same applies to the novelty of the DLP module, as mentioned by reviewer ndag.
[1] A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge
[2] BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
[3] Flamingo: a Visual Language Model for Few-Shot Learning
[4] Perceiver: General Perception with Iterative Attention
---
Reply to Comment 1.1.1:
Title: Response to concern #1: Benchmark
Comment: Thank you for the response and for posting the concerns to discuss!
> Why only evaluate on ScienceQA benchmark? There are several visual reasoning that rationale generation may help such as A-OKVQA [1]. The authors propose a rationale-focus method. I believe it's better to discuss what kind of problems and datasets it suits.
>
- We evaluate on the ScienceQA benchmark because (1) it is used to diagnose **multi-hop reasoning ability and interpretability** when answering multimodal science questions. These requirements make ScienceQA suitable for evaluating multimodal rationale generation (the rationale reveals **the interpretable process of multi-hop reasoning**) and the rationale’s effect on multimodal reasoning. (2) “The goal of ScienceQA is to aid development of a reliable model that is capable of generating **a coherent chain of thought** when arriving at the correct answer” (quoted from the ScienceQA paper). As we employ CoT prompting to generate rationales and use rationales as a form of CoT to arrive at answers, ScienceQA is well suited to evaluating the effectiveness of our DDCoT prompting. (3) ScienceQA features **rich** domain diversity (natural science, social science, and language science), context diversity, and level diversity with different grade-level science exams, and thus has multiple splits. These features are suitable for evaluating our method **in different and general situations**.
- Although ScienceQA is the most suitable dataset, we agree with the reviewer that our rationale generation can be applied to other suitable datasets and tasks as well (Reviewers oVaJ and kQGJ also pointed this out). We have conducted experiments on NoCaps for the captioning task and MSVD-QA for the video question answering task, both leading to performance improvements. Please see our responses to Reviewers oVaJ or kQGJ for detailed results and analysis.
---
Reply to Comment 1.1.2:
Title: Response to concern #2: Technical Difference
Comment: > The contribution is somewhat limited as the technical difference between RCVE module and former visual-text attention methods such as Q-former in BLIP2, perceiver in Flamingo, etc, seems limited [2,3,4]. Also, the novelty of DLP module, as mentioned by reviewer ndag.
>
Thank you for your rigorous consideration. We address our technical contributions and differences from prior work from three aspects:
- **Our contributions and technical novelties go far beyond proposing modules (RCVE+DLP) for fine-tuning learning** (”Utilization for Fine-tuning Learning” in Sec 3.3). Instead, ”Utilization for Fine-tuning Learning” **is only one part of one of our three core contributions**, proposed to validate that our rationales are helpful for multimodal reasoning not only in zero-shot prompting but also in fine-tuning learning. Our core contributions and novelties are:
1. We are the **first** to study and achieve **zero-shot multimodal rationale generation**, considering flexibility, generalizability, and explainability. We deeply analyze the challenges of **putting visual information into text to generate rationales** (Sec 3.1) and further **derive the critical principles** (Lines 187-190) for generating flexible and generalizable multimodal rationales using LLMs. These challenges (such as the discovery that LLMs’ hallucinations are intensified by interleaved multimodal inputs), analyses, and principles were not fully explored by previous works.
2. We propose zero-shot DDCoT prompting to generate multimodal rationales (Sec 3.2), which consists of three steps: (1) decomposing the question into sub-questions **with negative-space prompting that accounts for uncertainty** (negative-space prompting with uncertainty brings a 7.73% performance improvement; please refer to Tables 1 and 2 in the Appendix), (2) visual recognition to obtain visual complements, and (3) **integration into joint reasoning** (see the first-round “Rebuttal by Authors” for the importance of this design). Former methods do not explore these highlighted aspects.
3. We propose the utilization of rationales to improve LMs’ multimodal reasoning (Sec 3.3). To show the effectiveness and generalizability, we achieve multimodal reasoning in both settings: (1) **zero-shot prompting** and (2) **finetuning learning** (*the contribution mentioned by the reviewer*). Our methods significantly improve performance in both settings while exhibiting impressive generalization ability.
- Regarding the finetuning modules themselves, our RCVE and DLP also differ from former visual-text attention methods.
1. Q-former consists of two transformer submodules: an image transformer and a text transformer with shared self-attention layers. This design makes Q-former applicable to multiple different pre-training tasks simultaneously. In contrast, our approach utilizes a single transformer with cross-attention layers to compact visual embeddings from the rationale input, and our finetuning modules are directly supervised by the training objectives of the downstream task.
2. The Perceiver in Flamingo maps diverse-sized feature maps into a few visual tokens, disregarding the text context. In contrast, our RCVE first engages with the rationales generated by our DDCoT approach and subsequently compresses visual features under the guidance provided by the rationales.
3. While DLP is a common design in prompt learning, its role in our work diverges from previous research. Our DLP not only facilitates the alignment of vision and language modalities at a shallow level but also collaborates with RCVE to utilize rationales for joint multimodal encoding at every layer.
- Our proposed rationale generation is compatible with vision-language models, such as the mentioned Flamingo and the recent Mini GPT-4. We first generate rationales with our DDCoT prompting and feed the rationales as extra inputs to Flamingo and Mini GPT-4. Our DDCoT prompting significantly improves Flamingo and Mini GPT-4 by 11.14% and 10.96%, respectively.
| | NAT | SOC | LAN | TXT | IMG | NO | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Flamingo | 21.89 | 52.41 | 20.27 | 23.50 | 39.11 | 19.02 | 27.87 |
| Flamingo with our rationales | 39.20 | 48.93 | 30.90 | 39.68 | 45.81 | 32.40 | 39.01 |
| Mini GPT-4 | 43.83 | 48.59 | 43.36 | 55.01 | 42.84 | 41.67 | 44.71 |
| Mini GPT-4 with our rationales | 57.37 | 62.32 | 46.82 | 65.91 | 56.72 | 48.57 | 55.67 | | null | null | null | null | null | null |
Multi-task learning with summary statistics | Accept (poster) | Summary: The work considers multi-task learning in settings where for each task only the summary statistics X.T@Y and X̃.T@X̃ are made available. The setting is motivated by healthcare and biomedical research, where sharing individual-level microdata is restricted by regulations due to privacy concerns. The authors assume a linear and sparse or low-rank underlying model. A regularized least-squares type of optimization framework is used, and it is highlighted that only summary statistics are needed to solve it. A main contribution of the paper is a theoretical analysis bounding the quality of the sparse or low-rank estimator, taking into account quantities such as the overlap and distributional shift between the so-called discovery and proxy data used to calculate the summary statistics. Additionally, the problem of hyperparameter tuning is considered, as basic (cross-)validation methods are not applicable without access to individual data, and Lepski’s method is proposed as a practical alternative that requires only summary-level data as well as some prior knowledge in the form of a constant C parameter. Experiments on simulated data show results that are consistent with the theoretical analysis.
Strengths: - In general, approaches that allow learning from summary statistics are becoming ever more relevant to the community as regulation on sharing individual level data is tightening
- The exact formulation of the multi-task summary statistic learning problem considered in the paper, where separate discovery and proxy data sets are assumed appears to be novel, as consequently the theoretical bounds that take this into account provide also novel insights about the effects this can have on learning.
- Both L1,2 and nuclear norm regularization based variants of the learning problem are analyzed
- The proposed approach for model selection provides a practical tool for hyperparameter tuning when only summary statistics are available, and might have implications also for other variations of the problem setting?
- Fairly clear writing and technically rigorous
Weaknesses: - Real-world relevance: while applications such as polygenic risk prediction are mentioned, no convincing examples, real applications, or benchmark data sets are provided where the assumptions of the learning setting (having related tasks, only X(q).T@Y(q) and X̃(q).T@X̃(q) available, perhaps X(q) != X̃(q)) would hold. Thus questions about the practical significance of the exact problem setting considered and the proposed solution remain.
- Related to above point - no experiments on real-world data that would demonstrate the benefits of the approach
- Somewhat restrictive assumptions such as assumptions about the linearity of the underlying model and relationships between the tasks
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Can you clarify on the assumption that the discovery and proxy data sets can be separate, when would this in practice be an important consideration?
Figure 3: am I reading this correctly that the holdout method does not benefit at all from a larger validation sample size? Any explanation for this?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding the real-world application: To address this concern, we have included a real data application to polygenic risk prediction. Details of our analysis and the results are provided in the global response.
**Q1** In statistical genetics applications, many studies that investigate the marginal relationships between covariates and the outcome will report $X^TY$ but do not report $X^TX$. For this reason, the sample covariance must be computed from an external reference data set, such as the 1000 Genomes data [1]. The formulation of our problem and the theory that we develop captures this discrepancy between the datasets.
**Q2** The holdout method does not benefit from a larger sample size because the estimation accuracy will depend only on the sample size of the training dataset, as long as the tuning parameter is chosen to be the right order of magnitude. For the results in Figure 3, the grid of tuning parameters over which the holdout validation was performed was specified to be roughly the appropriate order.
[1] The 1000 Genomes Project Consortium et al., “A global reference for human genetic variation,” Nature, vol. 526, no. 7571, pp. 68–74, Oct. 2015, doi: 10.1038/nature15393.
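To make the summary-level fitting concrete, here is a minimal numpy sketch; the ridge penalty stands in for the paper's sparse/low-rank regularizers purely to obtain a closed form, and all dimensions and variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_tilde, p = 500, 400, 20

# Ground-truth model for the simulation.
A = rng.normal(size=(p, p))
Sigma = A @ A.T / p + np.eye(p)          # population covariance
beta = np.zeros(p)
beta[:5] = 1.0                           # sparse signal

# Discovery data: only the marginal statistics X^T Y are shared.
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Y = X @ beta + rng.normal(size=n)
xty = X.T @ Y                            # reported summary statistic

# Proxy data (external reference panel) supplies the covariance.
X_tilde = rng.multivariate_normal(np.zeros(p), Sigma, size=n_tilde)
cov_proxy = X_tilde.T @ X_tilde / n_tilde

# Ridge-type fit from summary-level inputs only:
# argmin_b (1/2) b^T (n * cov_proxy) b - b^T (X^T Y) + (lam/2) ||b||^2
lam = 1.0
beta_hat = np.linalg.solve(n * cov_proxy + lam * np.eye(p), xty)
```

Note that neither X nor Y is needed once `xty` and `cov_proxy` are formed, which is the property such estimators exploit under data-sharing restrictions.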
---
Rebuttal Comment 1.1:
Comment: Thank you for the addition of the real-world application and for the clarifications provided. I agree that these improve the quality of the work and have raised my rating accordingly to "6: weak accept". | Summary: The paper proposed a multi-task method to learn individual models without having access to the raw data but using summary statistics data for each task. The method is only applicable to linear models.
Strengths: 1. Theoretical guarantee for the optimal estimator
2. Good experiments on simulated data
Weaknesses: 1. Missing experiments on real data
2. Applicable only to linear models.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why is the discovery data taken from the actual observed data, while the sample covariance matrix is estimated from proxy data?
2. The proposed method works only for linear models. Can the same analysis be applied using non-linear (deep learning) models?
3. In Figure 1, can you explain the behavior that the proxy covariance gave a lower MSE than the true covariance?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding experiments on real data: We have included additional results demonstrating our method on a prediction task with real genetic data. We have provided a description of the analysis and the results in our global response.
**Q1** In statistical genetics applications, it is often the case that $X^TY$ is reported from studies that analyze the marginal associations between the covariates, which often correspond to genetic markers, and the outcome of interest. However, these studies rarely report the covariance structure between genetic markers, and this information needs to be obtained from additional external datasets, such as the 1000 Genomes data [1]. Our formulation captures this discrepancy between the data used to compute the marginal associations and the covariance matrix.
**Q2** Our results are tailored towards linear models, but our approach may be applied to any estimator whose loss function depends only on the sample covariance matrix and summary association statistics. We anticipate that similar results may be possible for other classes of (non-linear) models such as deep neural networks. However, to fit a neural network using only summary-level information, practitioners need to be able to share new classes of summary statistics that correspond to more complex loss functions. The problem could be formulated as a federated learning problem [2], in which data owners iteratively share gradient-type information for model updating. However, this approach differs from our motivation, which aims to leverage pre-existing summary stats instead of requiring continuous sharing during model training.
**Q3** The behavior in Figure 1 is due to a pre-processing step in the simulation that regularizes the proxy data for better numerical stability across replications. If this pre-processing step is not performed, the behavior of the proxy data estimator converges to that of the true covariance, as predicted by our theory. In the new simulation result described in our global response, we do not perform this pre-processing step, and the estimator behaves as expected according to our theory.
[1] The 1000 Genomes Project Consortium et al., “A global reference for human genetic variation,” Nature, vol. 526, no. 7571, pp. 68–74, Oct. 2015, doi: 10.1038/nature15393.
[2] M. I. Jordan, J. D. Lee, and Y. Yang, “Communication-Efficient Distributed Statistical Inference,” Journal of the American Statistical Association, vol. 114, no. 526, pp. 668–681, Apr. 2019, doi: 10.1080/01621459.2018.1429274. | Summary: This paper presents an approach to learning predictive models from summary statistics in the multi-task learning setting. A linear model is assumed, and a least-squares solution is derived with either the \ell_{2,1}-norm or the nuclear norm of the parameter matrix to capture relatedness among tasks. Theoretical results are provided bounding the estimator when proxy data are used to obtain the covariance matrix, as well as for hyperparameter tuning. Synthetic data was used in the empirical evaluation.
Strengths: + The studied problems, both training predictive models from summary statistics and using proxy data to estimate covariance, are of great importance, given the constraints on data sharing in medical research and the lack of availability of covariance information in the current practice of GWAS.
+ Theoretical results can be very useful in assessing the impact of the use of proxy data.
+ The approach proposed to tune the hyperparameters is important, given that there is typically no validation set of individual-level data available for such tuning.
Weaknesses: - Even though the studied problems are significant, the proposed approach lacks technical innovation, appearing to be a straightforward extension of existing works for both the bound derivation and hyperparameter tuning. The authors may point out the technical challenges in these extensions, highlighting their technical contribution.
- Lack of evaluation on practical datasets. It is unclear how useful this method is in practical problems.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - No test set is mentioned in the description of the experiment. Is there a separate test set? I do not think training MSE is a good measure, as it is affected by the sparsity-inducing term in the optimization.
- I found the results shown in Figure 2 a bit confusing. Given $n = \tilde{n}$, $\rho = 1$ means the proxy data is exactly the true data, meaning the proxy covariance is exactly the true covariance. Also, the covariance is the only input the estimator takes from the proxy data. Together, these imply that the solution obtained with Proxy_Cov should be the same as that with True_Cov at $\rho = 1$. How is it, then, that the two plots show the former matching that of IL_Cov?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback, and for recognizing the strengths of our paper and the importance of the problem that we address.
Regarding the technical innovation of our paper: The proofs of the theoretical results present unique challenges in the proxy data setting. The derivation of the error bounds relies on a careful analysis of the contributed variance from each data point $X_i$ and $\tilde{X}_i$, which to our knowledge is entirely novel for high dimensional linear models. The details of this analysis can be found in the proof of Lemma A.4 in our supplement. The closest related result to ours is Theorem 2.1 of [2], but our results generalize this theorem by accounting for the overlap between the reference data and the discovery data and by allowing for multiple outcomes. Our derivations for the hyperparameter tuning procedure are generalizations of the techniques used in [1]. To our knowledge, this is the first application of Lepski’s method to high-dimensional regression problems beyond the Lasso and is the first time that the use of Lepski’s method has been motivated by data-sharing restrictions. Our proofs for the model tuning procedure rely on the properties of general convex regularizers, analogous to the framework provided in [3].
Regarding evaluation on practical datasets: Thank you for this suggestion. We have performed a data analysis demonstrating the performance of our method on a real genetic dataset, where the goal is to predict low-density lipoprotein (LDL) levels using genotype data. The results and an in-depth description of the analysis are given in our global response.
**Q1** Our previous simulation results present the mean-squared error of parameter estimation per task; in other words, we present the quantity $\frac{1}{nQ}\|\hat{B} - B^*\|^2$. We use the entire dataset to compute $\hat{B}$, so we did not use a test set. We will clarify this in the final version of our manuscript. To address your concerns, our new real data application uses a standard train-test split to evaluate prediction performance. We provide more details in the global response. Additionally, we have performed an additional simulation study that analyzes the impact of the proxy data sample size on prediction performance for each of our estimators, under the regime of no overlap between the proxy and discovery data. The details and results of this simulation are in the additional PDF provided in the global response.
**Q2** When $\rho = 1$, the proxy data is the same as the individual-level data, meaning that $\hat{\Sigma} = \tilde{\Sigma}$. In this case, the Proxy_Cov estimator is expected to perform as well as the IL_Cov estimator, which is confirmed by our simulation results in Figure 2. The ‘true’ covariance label refers to the underlying population-level covariance which we do not observe in practice. The True_Cov estimator uses this population-level matrix as the input to our estimator. This is not possible in practice, but we include the results on our figure to demonstrate that overlap between the proxy data and discovery data is more important for good statistical performance than having an increasingly large reference panel.
[1] M. Chichignoud, J. Lederer, and M. Wainwright, “A Practical Scheme and Fast Algorithm to Tune the Lasso With Optimality Guarantees.” arXiv, Nov. 08, 2016. Accessed: Apr. 24, 2023. [Online]. Available: http://arxiv.org/abs/1410.0247
[2] S. Li, T. T. Cai, and H. Li, “Estimation and Inference with Proxy Data and its Genetic Applications,” arXiv:2201.03727 [math, stat], Jan. 2022, Accessed: Mar. 07, 2022. [Online]. Available: http://arxiv.org/abs/2201.03727
[3] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu, “A Unified Framework for High-Dimensional Analysis of $M$-Estimators with Decomposable Regularizers,” Statist. Sci., vol. 27, no. 4, Nov. 2012, doi: 10.1214/12-STS400.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thanks to the authors for taking time to respond to my comments. A more detailed description of the synthetic data generation is desired. What's the role of the true (population) $\Sigma$ in this process? How was it specified/computed? Any thoughts on why the use of the true $\Sigma$ led to poorer results?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for responding to our rebuttal. The true (population-level) $\Sigma$ matrix is generated as a random positive definite matrix by drawing a matrix $A \in \mathbb{R}^{n \times p}$ with $N(0,1)$ entries and computing $\Sigma = A^TA + I_p$. We add the identity matrix to ensure that $\Sigma$ is positive definite. We then use the matrix $\Sigma$ to generate the rows of $X$ and $\tilde{X}$ as independent $MVN(0, \Sigma)$ random variables. The use of $\Sigma$ leading to poorer estimation and prediction results (compared to using the matrix $\hat{\Sigma}$ which is directly obtained from the discovery data $X$, i.e. the $\rho = 1$ case) is predicted by our theory (see Theorems 3.1 and 3.2). When $\Sigma$ is used as the input to our estimators, this corresponds to the infinitely large proxy data regime with no overlap between the discovery data and proxy data (in other words, $\tilde{n} \rightarrow \infty$ and $\rho = 0$). The form of $\gamma$ in Theorems 3.1 and 3.2 implies that the convergence rate of the estimator in this regime is strictly worse than the optimal minimax rate, which is only achieved if the proxy data and the discovery data are the same. The results of our updated simulation study (see Figure 1 in the newly attached pdf file) support our theory.
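The generation procedure described above can be sketched in a few lines of numpy; the dimensions `n` and `p` here are illustrative, not the values used in the paper's simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 20  # illustrative sample size and dimension

# Random positive-definite population covariance: Sigma = A^T A + I_p,
# where A has i.i.d. N(0, 1) entries; adding I_p guarantees positive definiteness.
A = rng.standard_normal((n, p))
Sigma = A.T @ A + np.eye(p)

# Rows of the discovery data X and the proxy data X_tilde are
# independent MVN(0, Sigma) draws, generated via the Cholesky factor of Sigma.
L = np.linalg.cholesky(Sigma)
X = rng.standard_normal((n, p)) @ L.T
X_tilde = rng.standard_normal((n, p)) @ L.T
```

Since $\Sigma = A^TA + I_p$, every eigenvalue of $\Sigma$ is at least 1, so the Cholesky factorization always succeeds.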
Thank you again for your comment and please let us know if we can provide any further clarification. | Summary: The paper addresses the problem of multi-task learning in settings where only summary statistics (instead of individual-level data) are available, which is a common scenario e.g. in medicine. The paper presents a framework for linear relationships between the covariates and the outcomes which uses summary-level information from distinct sources with a general data-driven scheme for selecting tuning parameters. The results from the theoretical analysis of this framework are confirmed with numerical experiments on synthetically generated data.
Strengths: The paper addresses an important problem for fields where individual level data is not publicly available, which is often the case in health care applications. The numerical experiments confirm the theoretical analysis.
Weaknesses: While the paper addresses an important topic, the presented framework comes with strong limitations. Particularly, the framework can only be applied to linear models, which drastically reduces the potential applications of this approach. Additionally, the experiments are limited to rather simple, synthetic examples that indeed show the expected behaviour based on the theoretical results, but do not provide any insights on the applicability of the framework in real-world scenarios. Finally, the presentation could be improved (see comments below).
Comments on presentation:
- on related work: First, related work should also be a numbered section (not an unnumbered subsection of the introduction). Second, there was also quite recently some work by Meija et al. (2022) on estimating causal effects based only on summary statistics reported in medical studies, which seems quite related to the presented problem setup. There the authors show that merging datasets with maximum entropy improves the predictive power compared to using the observed marginal distributions as predictors.
- Line 99: public available -> publicly available.
- After equation (1) you already start using variables with and without tilde ($\tilde{\bf X}$ and ${\bf X}$, $\tilde{\mathcal{D}}$ and $\mathcal{D}$), but you have not really introduced what the difference between them is and only introduce proxy and discovery data (without explaining the terms) in section 3. You should introduce the proxy data (and what that means) much earlier.
- Line 124: in a data-driven in the ... -> in a data-driven way in the ...
- Line 135 and equation above: In the equation use \left( and \right) for the brackets to scale them appropriately. Further, you use $\varepsilon$ as random noise before (above line 88), and now define $\mathcal{E}$ as the cost of using proxy data that is different from the discovery data. That's confusing. Similarly confusing is that you now have $\tilde{\Sigma}$, $\Sigma_1$ and $\Sigma_2$. Isn't $\Sigma_2$ the same as $\tilde{\Sigma}$? (if not, please clarify what's the difference).
- Line 175: what is $\tilde{p}$? wasn't there only $p$ and $\tilde{\rho}$ before?
- position parameters of figure 2 and 3 should be set to [t]
- Overall structure could be improved. I suggest having a section on notation and assumptions used throughout the paper before the current section 2. Then you can introduce the problem setup. In that section you currently already mention how summary statistics get included. One could consider doing that only in the next section that focuses on your approach. This way it should (hopefully) be clearer what you propose and what is new about it. Right now it's difficult to understand what are common assumptions, what is common knowledge, and what was added on top by you.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - The framework is designed for multi-task learning with linear relationships between the covariates and the outcomes. Can you provide real-world examples that satisfy this constraint?
- You note that ${\bf X}^{(q)}$ and $\tilde{\bf X}^{(q)}$ are not necessarily the same. Can you give a practical example where they are the same, and one where they are not?
- To obtain equation (2), you replace ${\bf X}^T{\bf X}$ with $\tilde{\bf \Sigma}$, although you mention before that $\tilde{\bf \Sigma}\propto \tilde{\bf X}^T \tilde{\bf X}$, and ${\bf X}$ and $\tilde{\bf X}$ are not the same. Why can you do this replacement here then?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: authors adequately addressed the limitations and potential negative societal impact of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for excellent feedback about the presentation of our paper, and for raising several important questions. Here we address each of the comments on the paper’s presentation:
* (Regarding the related work) Thank you for your feedback on formatting and for pointing out the recent work of Meija et al (2022). We will include this paper in our literature in the final version of our paper.
* (Regarding the comment on Line 99) Thank you for pointing out this typo, this will be fixed.
* (Regarding the comment on equation (1)) Thank you for this feedback, we will introduce a notation section in the final version of our paper to better clarify the difference between the discovery data X and the proxy data \tilde{X}.
* (Regarding the comment on Line 124) We will fix this typo.
* (Regarding the comment on Line 135) We will change the notation from $\mathcal{E}$ to a different character in the accepted version of our manuscript to prevent any confusion with the error term $\varepsilon$. $\Sigma_1$ is the population-level covariance of $X$, meaning that $\Sigma_1 = E[\frac1n X^TX]$. Similarly $\Sigma_2 = E[\frac1n\tilde{X}^T\tilde{X}]$ is the population-level covariance of the proxy data. We will clarify this in the notation section in our final version of this paper.
* (Regarding the comment on Line 175) Thank you for pointing out this typo, this character should be $\tilde{\rho}$.
**Q1** In our target application, which is genetic risk prediction, genetic variants contribute to observed traits additively, making linear modeling the most effective choice [3,4,5]. We will include a discussion of this assumption in the final version of our paper.
**Q2** In biomedical applications, $X^TY$ can be obtained from studies that focus on marginal
associations between covariates and the outcome of interest. However, it is often the case that
these same studies may not report the correlations between the covariates. In this case, such
information has to be derived from other studies or reference datasets.
For instance, in genetic studies, many research papers provide GWAS summary statistics
(https://www.ebi.ac.uk/gwas/), which offer insights into the associations of genetic variants with
the outcome. However, the correlations between these genetic variants are often reported in
only a few studies, such as the UK Biobank [2] and the 1000 Genomes data [1].
**Q3** The reviewer is correct that $X$ and $\tilde{X}$ are not the same, and this is due to
the availability of summary statistics---studies that report $X^TY$ may not report $X^TX$,
and we need to find a proxy for $X^TX$ from publicly available data. This replacement is how our estimator is defined.
The main contribution of our theory essentially is to quantify the
fundamental cost of performing this replacement. If both $X^TY$ and $X^TX$ are
available from the same study, our theory suggests using the summary stats
from the same study, instead of using a proxy dataset.
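The substitution described in this answer can be illustrated concretely. The paper's actual estimators are penalized ($\ell_{2,1}$ or nuclear-norm) multi-task estimators, so the single-task ridge form below is only a hedged sketch of the replacement of $\frac{1}{n}X^TX$ by the proxy covariance $\tilde{\Sigma}$; all variable names, sizes, and the identity population covariance are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_tilde, p, lam = 200, 500, 10, 0.05  # illustrative sizes and ridge penalty

X = rng.standard_normal((n, p))              # discovery data (individual level)
X_tilde = rng.standard_normal((n_tilde, p))  # proxy / reference-panel data
beta_star = np.zeros(p); beta_star[:3] = 1.0
Y = X @ beta_star + rng.standard_normal(n)

xty = X.T @ Y / n                            # summary statistic X^T Y (available)
Sigma_hat = X.T @ X / n                      # discovery covariance (often unavailable)
Sigma_tilde = X_tilde.T @ X_tilde / n_tilde  # proxy covariance used as a stand-in

# Ridge-type estimators: individual-level covariance vs. the proxy replacement
beta_il = np.linalg.solve(Sigma_hat + lam * np.eye(p), xty)
beta_proxy = np.linalg.solve(Sigma_tilde + lam * np.eye(p), xty)
```

When the proxy and discovery data share a population distribution (as in this toy example), `beta_proxy` is close to `beta_il`; the theory quantifies the extra error incurred by the substitution.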
[1] The 1000 Genomes Project Consortium et al., “A global reference for human genetic variation,” Nature, vol. 526, no. 7571, pp. 68–74, Oct. 2015, doi: 10.1038/nature15393.
[2] C. Sudlow et al., “UK Biobank: An Open Access Resource for Identifying the Causes of a Wide Range of Complex Diseases of Middle and Old Age,” PLoS Med, vol. 12, no. 3, p. e1001779, Mar. 2015, doi: 10.1371/journal.pmed.1001779.
[3] Chatterjee, Nilanjan, Jianxin Shi, and Montserrat García-Closas. "Developing and evaluating polygenic risk prediction models for stratified disease prevention." Nature Reviews Genetics 17.7 (2016): 392-406.
[4] Torkamani, Ali, Nathan E. Wineinger, and Eric J. Topol. "The personal and clinical utility of polygenic risk scores." Nature Reviews Genetics 19.9 (2018): 581-590.
[5] Choi, Shing Wan, Timothy Shin-Heng Mak, and Paul F. O’Reilly. "Tutorial: a guide to performing polygenic risk score analyses." Nature protocols 15.9 (2020): 2759-2772. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their valuable and insightful feedback. In this global response we would like to describe the major changes we made to address the reviewers’ comments, which greatly improved the quality of our work.
**First, we add a new simulation study evaluating our method based on out-of-sample prediction accuracy.** The setup of the simulation is the same as that considered in the previous submission. We simulated a dataset with $n_{min} = 100$, $p = 100$, $\tilde{n} = \tau n_{min}$ for $\tau \in (0.5, 1, 5, 10)$ and $\tilde{\rho}_q = 0$ for each $q$. The number of tasks was fixed at 8. We generated a row-sparse $B^*$ with 10 nonzero rows for the sparse multi-task estimator and a rank-2 $B^*$ for the low-rank multi-task estimator. After fitting our model and obtaining the parameter estimate of $B^*$, we generate a new test set of size 100 according to the same data generating process and evaluate our estimator's ability to predict the outcome in this test set. The results are given in the first figure in our new additional PDF document. We found that the relative performance of our method compared to the individual-level estimator and the true covariance estimator is similar to the estimation task we showed in our previous submission. The performance of the proxy data estimator improves as the size of the proxy data grows, matching that of the oracle estimator that uses the true covariance matrix as its input. We observe the same performance gap between the proxy data estimator and the individual-level data estimator.
**Secondly, we have applied our method to analyze real genetic data to demonstrate the real-world applicability of our method.** We use multi-site data obtained from the electronic Medical Records and Genomics (eMERGE) network [1], which includes individual-level genotype data from multiple research sites in the United States. Our goal is to predict levels of low-density lipoprotein (LDL) across five adult sites, treating the data from each site as a separate task. We split the data (with sample sizes $n_1 = 3813, n_2 = 546, n_3 = 2666, n_4 = 1435, n_5 = 525$) at each task into a training and test set (with a test set size of 100 for each task) and evaluate the performance of our method using the prediction MSE on the test set.
The training data from each site is used to construct the discovery summary statistics $X_q^TY_q$ for each task. For approximating $\Sigma_q$, we choose two different approaches: one is to use half of the genotype data from each site (this approach is labeled as Proxy_MTL1); the other is to use $X_1$ (genotype data from site 1) to approximate $\Sigma_q$ for all the sites. This approach is labeled as Proxy_MTL2. We use these two approaches to demonstrate a potential trade-off in the construction of the reference panel: Proxy_MTL1 uses a well-specified reference dataset with a smaller sample size; Proxy_MTL2 uses a larger reference dataset that may suffer from a distribution shift. For comparison, we also fit a multi-task learning estimator that uses all of the individual-level training data for each task, which is labeled ‘Individual_MTL’, and we fit a ridge regression estimator that models each task separately, for comparison to our multi-task learning approach. The ridge estimator uses the proxy sample covariance instead of the individual-level covariance matrix for a fair comparison with our method. We repeat the train-test split process 10 times, which yields the distribution of prediction MSE values that we report in the figure. We use the nuclear norm penalized multi-task learning estimators in this application, because we believe that the genetic effects in this dataset are dense.
Our results demonstrate that our method is highly practical when only summary-level information is available, as the prediction MSE of our method is nearly the same as that of the estimator which uses the individual-level data, with only a slight cost in performance. Furthermore, all multi-task learning estimators outperform the ridge estimator, confirming that multi-task learning is a strong approach when there is shared structure between tasks.
**Finally, we clarified the motivation and provided real-world justifications for considering the discovery data to be different from the proxy data, and we clarified our technical innovation.** Due to the space limit, we refer the reader to the point-by-point responses to each reviewer.
Once again, we thank the reviewers for their comments, and we look forward to further discussion.
[1] McCarty, C.A., Chisholm, R.L., Chute, C.G. et al. The eMERGE Network: A consortium of biorepositories linked to electronic medical records data for conducting genomic studies. BMC Med Genomics 4, 13 (2011). https://doi.org/10.1186/1755-8794-4-13
Pdf: /pdf/e9917f91459ee33d32db0ba5bbfcb9595a6c59ae.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: Multi-task learning is a powerful machine learning paradigm for integrating data from multiple sources to improve overall model performance. However, data-sharing constraints in healthcare settings hinder its application. To address this challenge, a flexible multi-task learning framework utilizing summary statistics from various sources is proposed, along with an adaptive parameter selection approach based on a variant of Lepski's method. A systematic non-asymptotic analysis characterizes the performance of the proposed methods under various regimes of sample complexity and overlap. Extensive simulations demonstrate the theoretical findings and the performance of the method, offering a flexible tool for training related models across various domains with practical implications in genetic risk prediction and other fields.
Strengths: Multi-task learning is a promising approach to integrating data from multiple sources and improving individual task performance.
Data-sharing constraints in healthcare and biomedical research limit access to individual-level data, making summary statistics a valuable substitute. The proposed multi-task learning framework allows for simultaneous learning of multiple models using only publicly available summary statistics. A systematic non-asymptotic analysis characterizes the performance of the proposed methods under various regimes of sample complexity and overlap. An adaptive scheme for tuning parameter selection based on a variant of Lepski's method is proposed, allowing for data-driven tuning when only summary statistics are available. The framework has practical applications in genetic risk prediction and can be used to develop trans-ethnic prediction models. The ability to learn from summary statistics offers a versatile tool for developing models across various domains.
Indeed, a major advantage of this method is that it has theoretical guarantees. Through systematic non-asymptotic analysis, this method provides theoretical guarantees for the performance of the multi-task learning framework based on summary statistics, especially under different regimes of sample complexity and overlap. These theoretical guarantees help us better understand the performance of this method and provide guidance for practical applications. Additionally, this method proposes an adaptive parameter selection approach based on a variant of Lepski's method, which can perform data-driven tuning of parameters when only summary statistics are available. The effectiveness of this method is also theoretically guaranteed, which can help us better select parameters and improve the performance of the model in practical applications. Therefore, this method has great practical value and provides a feasible approach for multi-task learning under data sharing constraints.
Weaknesses: 1. One major drawback of this method is the lack of comparison with classical multi-task learning methods. While this method provides a novel and useful approach for dealing with data sharing constraints, comparing its performance with existing multi-task learning methods would help to further evaluate its effectiveness. By comparing their performance, we can gain a better understanding of the strengths and weaknesses of these methods and provide better guidance for practical applications. Therefore, future research could consider comparing this method with existing multi-task learning methods.
2. Another potential drawback of this method is that it may not be able to integrate with existing deep learning models. This method is based on training multiple models using basic summary statistics, while deep learning models typically require large amounts of raw data and complex network structures for training. Therefore, integrating this method with existing deep learning models may face many challenges, such as how to convert summary statistics into input data and how to design network structures suitable for summary statistics. While this method is useful in utilizing summary statistics, it may not be applicable in tasks that require processing large amounts of raw data and performing complex computations. Therefore, in practical applications, appropriate methods and models need to be selected based on task requirements.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How does the performance of this method compare to existing classical multi-task learning methods?
2. Can this method be integrated with existing deep learning models, and if so, how?
3. In what type of tasks is this method most suitable, and what type of tasks may require other methods and models?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1** Our methods build upon classical multi-task learning techniques, and enable fitting models only using basic summary statistics which are often made publicly available. The sparse $\ell_{2,1}$ regularized estimator extends the group-sparse estimators studied in [4,6], while the nuclear norm estimator expands on the low-rank regression model described in [1]. The nuclear norm approach is closely related to the linear representation learning problem [2,5], constraining regression coefficients to a shared low-dimensional subspace.
The main motivations behind employing these multi-task learning methods but not others are
threefold: (1) In genetic risk prediction modeling, the additive nature of genetic effects makes
linear modeling the most effective choice. (2) Across populations and related phenotypes,
similarities in genetic architectures can be characterized by distance measures in model
parameters, leading to the enforcement of shared structures through penalty terms. (3) The
widespread availability of GWAS summary statistics, capturing the marginal correlation between each SNP-phenotype pair, further supports our approach.
In terms of other multi-task learning methods, our general approach of using proxy variables to compute the second-order structure of the predictor variables may be applied to Multi-Task Kernel Ridge Regression in Reproducing Kernel Hilbert Spaces, following [3]. In particular, we may use the proxy variables to compute the kernel matrix between the features. However, the limited availability of publicly accessible proxy kernel matrices in scientific applications hinders the potential usefulness of this method.
**Q2** In real-world applications, fitting a deep learning model with only $X^TX$ and $X^TY$ available poses significant challenges, if not outright impossibility. The summary statistics represent highly condensed summaries of linear relationships, whereas the effectiveness of deep learning models lies in their capacity to capture non-linear relationships. Hence, relying solely on such information is intuitively insufficient for successful deep learning model training.
Nevertheless, we anticipate that training neural models could be feasible if new classes of summary statistics are created and incorporated. This approach would require practitioners to explore and develop innovative summary-level information.
In a more extreme scenario, the problem could be formulated as a federated learning problem [7], wherein data owners iteratively share gradient-type information for model updating. However, this approach differs from our motivation, which aims to leverage pre-existing summary stats instead of requiring continuous sharing during model training.
An intriguing avenue worth exploring is the integration of external basic summary statistics to assist in the training of neural models using internal datasets. This direction is currently under investigation, and we are actively exploring its potential implications.
**Q3** Our method is particularly well-suited for applications where the dominant signals exhibit
linearity. For instance, in genetic risk prediction modeling, linear models have proven to be
effective. This choice is driven by the constraint of utilizing only publicly available summary
statistics. We advocate that in many scenarios, linear models serve as robust and stable
working models. With our method, researchers without access to individual-level data sources
can still gain insight from summary statistics in terms of classification or risk modeling.
Moving forward, we recognize the importance of developing novel methods and theories for
other canonical statistical problems. These endeavors may necessitate summary statistics in
new forms, tailored to specific tasks, and represent crucial areas for future research.
[1] S. Negahban and M. J. Wainwright, “Estimation of (near) low-rank matrices with noise and high-dimensional scaling,” Ann. Statist., vol. 39, no. 2, Apr. 2011, doi: 10.1214/10-AOS850.
[2] S. S. Du, W. Hu, S. M. Kakade, J. D. Lee, and Q. Lei, “Few-Shot Learning via Learning the Representation, Provably.” arXiv, Mar. 30, 2021. Accessed: Feb. 10, 2023. [Online]. Available: http://arxiv.org/abs/2002.09434
[3] Charles Micchelli and Massimiliano Pontil, “Kernels for Multi-task Learning,” Advances in Neural Information Processing Systems, vol. 17, 2004.
[4] K. Lounici, M. Pontil, S. van de Geer, and A. B. Tsybakov, “Oracle inequalities and optimal inference under group sparsity,” The Annals of Statistics, vol. 39, no. 4, pp. 2164–2204, Aug. 2011, doi: 10.1214/11-AOS896.
[5] N. Tripuraneni, C. Jin, and M. I. Jordan, “Provable Meta-Learning of Linear Representations.” arXiv, Dec. 31, 2021. Accessed: Sep. 08, 2022. [Online]. Available: http://arxiv.org/abs/2002.11684
[6] K. Lounici, M. Pontil, A. B. Tsybakov, and S. van de Geer, “Taking Advantage of Sparsity in Multi-Task Learning.” arXiv, Mar. 08, 2009. Accessed: Sep. 08, 2022. [Online]. Available: http://arxiv.org/abs/0903.1468
[7] M. I. Jordan, J. D. Lee, and Y. Yang, “Communication-Efficient Distributed Statistical Inference,” Journal of the American Statistical Association, vol. 114, no. 526, pp. 668–681, Apr. 2019, doi: 10.1080/01621459.2018.1429274. | null | null | null | null | null | null |
Towards Last-layer Retraining for Group Robustness with Fewer Annotations | Accept (poster) | Summary: The paper tackles the aspect of last-layer retraining for better group robustness. The authors show that last-layer retraining can greatly improve worst-group accuracy with little worst-group data. Motivated by this, selective last-layer finetuning (SELF) is proposed to improve group robustness.
Strengths: 1. The paper focuses on the interesting aspect of improving robustness via last-layer retraining under spurious correlation. The problem is specific and practical.
2. The experiments are well done. Also, the comparisons are consistent through out the paper.
3. The findings and proposed methods are well presented. The finding of "last-layer retraining can substantially improve WGA even when the reweighting dataset has only a small proportion of worst group data" is interesting.
Weaknesses: 1. Novelty:
(1) Last-layer retraining/finetuning is a common technique used in imbalanced/long-tailed learning and various properties were found, see [1][2]. This casts doubt on the novelty of the claimed finding: "holding out a subset of training data to retrain the last layer can substantially outperform ERM on the entire dataset with no additional data, annotations, or computation for training". I wonder what the differences are if we treat tail classes in [1][2] as the worst group.
(2) The proposed method: Disagreement-based methods were proposed before, and they were merely adapted to the setting with spurious correlation. This adaptation is fine, but the performance in Table 4 cannot convince me since SELF cannot beat DFR in 2 out of 4 tasks.
2. Writing:
(1) Figure 1(a) is missing legends. For Figure 1(b), please note on the y-axis that it shows the relative increase, to avoid confusion.
(2) The contribution part is too long.
[1] Decoupling Representation and Classifier for Long-tailed Recognition. ICLR 2020
[2] BBN: Bilateral-Branch Network with Cumulative Learning for Long-Tailed Visual Recognition. CVPR 2020
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Is worst-group accuracy a valid metric for evaluating robustness? A common phenomenon in long-tailed learning is that increasing tail-class accuracy often leads to a decrease in head-class accuracy. I recommend also adding overall accuracy as a metric for comparison. Also, WGA cannot reflect the uniformity of the overall performance.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations are included in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We warmly thank Reviewer HP8n for their detailed comments, suggestions, and references. Below, we provide responses to each of the reviewer’s points.
(Novelty 1) Thank you for the comments and the references [1, 2]; we have added citations and discussion to Section 2. We remark that the reviewer’s concern is discussed at length in the DFR paper [3] which introduced last-layer retraining to the spurious correlations setting. In their Appendix A, they argue that these methods are not directly applicable to spurious correlations robustness, and many of their points apply to our work. For example, a crucial difference is that [1] retrains on data from the training set, whereas we retrain on held-out data only. Nevertheless, we provide a worst-group accuracy comparison between the methods of [1] and our proposed methods on Waterbirds and CelebA below.
| Method | Group Annotations | Retraining Data | Waterbirds WGA | CelebA WGA |
|--------------------------|-------------------|-----------------|----------------|---------------|
| LWS [1] | No | Train | 40.0 +/- 8.1 | 35.6 +/- 14.4 |
| cRT [1] | No | Train | 74.5 +/- 1.5 | 52.9 +/- 6.0 |
| CB last-layer retraining (ours) | No | Held-out | 92.6 +/- 0.8 | 73.7 +/- 2.8 |
| ES disagreement SELF (ours) | No | Held-out | 93.0 +/- 0.3 | 83.9 +/- 0.9 |
| DFR (our impl.) | Yes | Held-out | 92.4 +/- 0.9 | 87.0 +/- 1.1 |
Regarding the reviewer’s suggestion of treating tail classes as the worst group, we would like to clarify that the “worst group” is not a fixed subset of the data like tail classes are. Instead, the “worst group” is dynamic, and corresponds to the group attaining the minimum accuracy over all groups. Therefore, we cannot treat tail classes as the worst group, since we cannot explicitly control which group is the worst group. Furthermore, a “group” in our work is an element of the Cartesian product of the classes and the values of the spurious feature, so depending on the spurious feature, it may be the case that the tail classes are not all in the same group.
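The distinction drawn above can be made concrete with a minimal sketch (the arrays below are illustrative toy data, not from the paper's benchmarks): each group is one element of the Cartesian product of class label and spurious-attribute value, and the worst group is whichever of them attains the minimum accuracy under the current model.

```python
import numpy as np

# Toy example: y = class label, s = spurious attribute, preds = model predictions
y     = np.array([0, 0, 0, 1, 1, 1, 1, 0])
s     = np.array([0, 0, 1, 0, 1, 1, 0, 1])
preds = np.array([0, 0, 1, 1, 1, 0, 1, 0])

def worst_group_accuracy(y, s, preds):
    """Group = (class, attribute) pair; WGA = minimum per-group accuracy."""
    accs = {}
    for c in np.unique(y):
        for a in np.unique(s):
            mask = (y == c) & (s == a)
            if mask.any():
                accs[(int(c), int(a))] = float((preds[mask] == y[mask]).mean())
    return min(accs.values()), accs

wga, group_accs = worst_group_accuracy(y, s, preds)
# Which (class, attribute) pair is the "worst group" depends on the model's
# predictions, so it is not a fixed subset of the data like a tail class is.
```

Changing `preds` can move the minimum to a different (class, attribute) pair, which is exactly why the worst group cannot be identified with a fixed set of tail classes.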
(Novelty 2) We would like to clarify that the goal of our disagreement SELF method is **not** to beat DFR (i.e., achieve SOTA performance in the presence of group annotations). This is an unrealistic expectation, as we train with no group annotations on the reweighting dataset -- much less information than DFR, which requires group annotations for the entire reweighting dataset. Therefore, we consider DFR an oracle method, and our goal is to get as close as possible to the oracle level. In this respect, the most important comparison is to other methods that also do not use group annotations, which we make in Table 1. The table shows that the performance of our methods is indeed SOTA on 3 out of 4 benchmarks among methods not using group annotations. Considering the exception (CelebA), we remark that the large gap to CnC on the CelebA dataset also exists in concurrent work [4], suggesting that it may be an inherent limitation of last-layer retraining and not a shortcoming of our method (since CnC modifies the features, while last-layer retraining does not). Our methods outperform all other methods with the exception of CnC on the CelebA dataset.
(Writing 1) We thank the reviewer for the catch, and we have updated the figures according to the reviewer’s suggestions.
(Writing 2) We thank the reviewer for the suggestion, and we have reduced the length of the contributions section by shortening the paragraphs.
(Question) Thank you for the question and suggestion. We have included average accuracy numbers for ERM, class-balanced last-layer retraining, ES disagreement SELF, and DFR in Rebuttal Table 1. Our methods achieve comparable average accuracy to DFR, which is a favorable result as DFR is considered one of the SOTA methods for group robustness. We hope the average accuracy results assuage the reviewer’s concerns about how WGA alone does not reflect the uniformity of the group-accuracy distribution.
[1] Kang et al. “Decoupling representation and classifier for long-tailed recognition.” ICLR 2020.
[2] Zhou et al. “BBN: Bilateral-Branch Network with Cumulative Learning for Long-Tailed Visual Recognition.” CVPR 2020.
[3] Kirichenko et al. “Last Layer Re-training is Sufficient for Robustness to Spurious Correlations.” ICLR 2023.
[4] Qiu et al. “Simple and Fast Group Robustness by Automatic Feature Reweighting.” ICML 2023.
---
Rebuttal Comment 1.1:
Title: Update
Comment: I thank the authors for their detailed response. Most of my concerns are resolved and I will raise my score. | Summary: This paper proposes to collect a dataset from misclassifications or disagreements to fine-tune a classifier for improving sub-group accuracy in the presence of spurious correlations.
Strengths: The topic that this paper attempts to address is important, and the paper is well written. There are also some interesting findings in the paper, such as that fine-tuning on a dataset collected from disagreements could potentially improve group accuracy.
Weaknesses: The level of novelty presented in the paper is insufficient. My primary concern stems from the method's reliance on group labels during model selection, also known as the validation set. This requirement renders these techniques nearly impractical in real-world scenarios, as defining groups is a non-trivial task even in datasets with just a hundred samples, and this becomes much more challenging for current datasets comprising billions of data points.
The Waterbirds dataset utilized in this study has been identified to contain certain bugs, which were subsequently addressed in the MaskTune paper [1]. Consequently, the current values provided may be misleading. It is worth noting that the MaskTune paper, contrary to the assumption, does not incorporate any group labels during training or model selection (validation). It would be valuable for the authors to present the worst group accuracies achieved by SELF without the utilization of any training or validation group labels. Additionally, I encourage the authors to compare these results with the methods outlined in Tables 1 and 2 of [1].
[1] Asgari, Saeid, et al. "Masktune: Mitigating spurious correlations by forcing to explore." Advances in Neural Information Processing Systems 35 (2022)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please refer to the Weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We warmly thank Reviewer x5jy for their comments and suggestions. Below, we provide responses to each of the reviewer’s points.
(Setting) We thank the reviewer for the reference [1], and we have added a citation and discussion of [1] in Section 2. With that said, we explicitly focus on the setting where group annotations are available for model selection; this setting has been standard in the literature since JTT [2] and works in this area have contributed significant practical insights to the literature despite the annotation requirement [2, 4, 5, 6, 8]. Our main goal in this paper is **not** to beat the results in [1, 7], but rather to investigate the surprising performance of last-layer retraining in a more restrictive setting (our Section 4) and propose a SOTA algorithm for the setting of [2, 4, 5, 6, 8] based on a novel disagreement-based approach (our Section 5).
(Model Selection) We have included the results of an ablation on the validation set size as Rebuttal Figure 1; the results show that ES disagreement SELF is robust to tuning hyperparameters with as little as 1% of the validation data (though variance increases as expected). Our SELF method improves greatly upon class-balanced ERM even with as few as 6 validation examples for Waterbirds and 99 examples for CelebA, massively reducing the group annotation requirement. The CivilComments and MultiNLI ablations will be done by next week and available upon request by the reviewers.
(Section 4) We noticed the reviewer did not have any comments on our results in Section 4, which provided the separate insight that last-layer retraining can improve group robustness even if the reweighting dataset is drawn from the training distribution (i.e., is group-imbalanced). We would be very interested to hear your perspective on this set of results. Given the reviewer’s concern about model selection, we would like to emphasize that our results in Section 4 tune only the split ratio, which we show to be robust in Appendix D, and we recommend on Line 242 that practitioners **tune no hyperparameters at all** (therefore requiring no group annotations in the validation set). This is in contrast to concurrent work [5] whose results corroborate our Section 4.2, but they tune two required hyperparameters ($\gamma$ and $\lambda$ in their Section 3.1).
(Waterbirds) Thank you for bringing the Waterbirds bugs to our attention: we have updated the paper to include a reference to this phenomenon. With that said, we believe testing on the original Waterbirds dataset is still important and useful for comparison to previous SOTA methods [2, 3, 4]. Furthermore, we test on 3 additional rigorous benchmark datasets across both vision and language domains and achieve SOTA on 2/3 of these among methods in the setting where group annotations are used for model selection but not for training.
[1] Taghanaki et al. “MaskTune: Mitigating Spurious Correlations by Forcing to Explore.” NeurIPS 2022.
[2] Liu et al. “Just Train Twice: Improving Group Robustness without Training Group Information.” ICML 2021.
[3] Kirichenko et al. “Last Layer Retraining is Sufficient for Robustness to Spurious Correlations.” ICLR 2023.
[4] Zhang et al. “Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations.” ICML 2022.
[5] Qiu et al. “Simple and Fast Group Robustness by Automatic Feature Reweighting.” ICML 2023.
[6] Sohoni et al. “BARACK: Partially Supervised Group Robustness With Guarantees”. ICML 2022 SCIS Workshop.
[7] Lee et al. “Diversify and disambiguate: Learning from underspecified data.” ICLR 2023.
[8] Nam et al. “Spread Spurious Attribute: Improving Worst-group Accuracy with Spurious Attribute Estimation.” ICLR 2022.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: _"Our main goal in this paper is not to beat the results in [1, 7], but rather to investigate the surprising performance of last-layer retraining in a more restrictive setting..."_
Given the stopping criteria for training (why stop at epochs 2, 4, 10?) and using labelled data for model selection, I don't see much novelty compared to [29], hence I keep my score. | Summary: The paper provides a detailed analysis of the DFR procedure [1]. The authors specifically consider the case when the group annotations are limited or not available during training, and show that it is still possible to largely remover the DFR performance. Specifically, the authors retrain the last layer on data with class balancing, and on data where examples are selected according to some rule (e.g. misclassified examples). The authors also provide many detailed ablations and understanding experiments that support their findings.
Strengths: **S1.** The authors provide new interesting insights into the DFR procedure. For example, the results in Figure 1 on the number of minority data needed are quite surprising: even with a small increase in the proportion of minority data it is possible to significantly improve worst-group performance. Also, the results in Table 3 on the effectiveness of last-layer retraining without group rebalancing are thought-provoking. Finally, the results in Figure 3 show that better group balancing does not always lead to better WGA.
**S2.** The authors propose new practical methods (SELF), which build on DFR, but relax the requirement for group-balanced data. The results are discussed in Section 5. The authors also perform detailed ablations on the design decisions for the method, such as what cost function $c$ to use for selecting a subset of the data for last layer retraining.
**S3.** SELF can work even when the reweighting data doesn’t have class labels, requesting labels on only a small subset.
Overall, I believe this paper provides new insights that expand our understanding of DFR and more generally training models robust to spurious features. The proposed methodology is also promising.
Weaknesses: I believe there is one important missing experiment, and several technical details that should be explained or corrected. I explain these in detail below.
**W1.** Experiment: performance vs total number of group labeled examples
As every other group robustness method, SELF requires group-labeled data to perform hyper-parameter tuning. My understanding is that in all of the experiments, half of the validation data is used for hyper-parameter tuning. In other words, it is not true that the method can work without any group annotations (which is also true of JTT, CnC, and other methods). Moreover, SELF seems to involve more hyper-parameters than DFR (which only tunes regularization strength): length of last layer retraining, size of the reweighting dataset, learning rate, etc.
Consequently, I believe it is important to evaluate performance as a function of the total number of the group-labeled datapoints used by the method. In case of DFR, these datapoints would be used for last-layer retraining, and in case of SELF, they will only be used for hyper-parameter tuning. I think it may be reasonable to create a group-balanced validation set of size $n$, and plot WGA vs the size $n$. For example of a similar experiment, see Fig. 5 in [2].
I believe, this experiment is quite important for evaluating the methodological contribution of the paper.
**W2.** Civilcomments
I believe the results presented in Table 1 for CivilComments mix two different versions of the dataset. Specifically, JTT and CnC use the [Wilds version](https://wilds.stanford.edu/datasets/#civilcomments) with 16 overlapping groups, while DFR, RWY-ES, Group-DRO and the methods described in the paper use a version with 4 groups, where all the spurious attributes are combined together. As a result, there is a big discrepancy in performance. I would recommend either (1) reporting the results for all methods on the WILDS version of the dataset, or (2) removing the JTT and CnC entries and clarifying the difference in the versions of the data.
**W3.** Waterbirds
On waterbirds, the validation data is group balanced up to class balancing. As a result, by group-balancing, the authors are able to achieve DFR performance with class-balanced last-layer retraining (CB in Table 1). However, in this case the method is virtually identical to DFR.
I think it would be more meaningful to do the experiment using a base reweighting dataset that has the same group distribution as the training data. This is the case for all other datasets.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: **Q1. Early stopping:** What is the early stopping criterion for the models in Section 5?
**Q2. Free lunch:** For the free-lunch results in Section 4.2 do you need regularization or early-stopping, optimized on validation WGA? Or is it quite robust to hyper-parameters?
**Q3. Hyper-parameters:** Could you please list all hyper-parameters that a practitioner needs to tune for the SELF method?
Finally, the paper [2] appears to be quite relevant, as it also attempts to automatically construct a reweighting dataset for DFR. However, [2] is a concurrent work, as it was published only after the submission of this paper. So there are no issues with the novelty of the paper.
**References**
[1] [_Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations_](https://openreview.net/forum?id=Zb6c8A-Fghk); P. Kirichenko, P. Izmailov, A. G. Wilson; ICLR 2023
[2] [_Simple and Fast Group Robustness by Automatic Feature Reweighting_](https://arxiv.org/abs/2306.11074); S. Qiu, A. Potapczynski, P. Izmailov, A. G. Wilson; ICML 2023
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are adequately discussed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We graciously thank Reviewer Ep6f for their in-depth analysis, insightful comments, and attention to detail. Below, we provide responses to each of the reviewer’s points.
(W1) Thank you for this great suggestion. We have included the results of the requested ablation as Rebuttal Figure 1; the results show that ES disagreement SELF is robust to tuning hyperparameters with as little as 1% of the validation data (though variance increases as expected). Our SELF method improves greatly upon class-balanced ERM even with as few as 6 validation examples for Waterbirds and 99 examples for CelebA, massively reducing the group annotation requirement. The CivilComments and MultiNLI ablations will be done by next week and available upon request by the reviewers.
We would also like to clarify that length of last-layer retraining (i.e., the number of optimizer steps) is not a hyperparameter in our method, as the number of optimizer steps is kept constant as the reweighting dataset size changes. Please see (Q3) below for additional discussion on the hyperparameters that SELF tunes.
(W2) Thank you for this good catch regarding the CivilComments dataset. We have removed the CivilComments JTT and CnC entries from Table 1 and detailed the difference between the two versions of the dataset in a footnote. We would like to clarify that every method in Table 1 uses the WILDS version [1] of the dataset, but the 4-group version we study collapses all the spurious attributes (“female”, “LGBT”, etc.) into one attribute. The original (non-WILDS) CivilComments dataset [2] is not used.
(W3) We thank the reviewer for the suggestion. We actually included this experiment in the paper in Figure 2(a) with the corresponding table version in Appendix C Table 7, and we remark on the reviewer’s point on Lines 229-231. By performing class-balanced last-layer retraining on the training distribution, we improve WGA by 4.8% over class-balanced ERM, which is actually larger than the 3.3% increase gained by using the validation distribution.
(Q1) The early-stop criterion for SELF is the percentage of total training completed. Specifically, we use models stopped at 10%, 20%, and 50% of total training (corresponding to, e.g., 2, 4, and 10 epochs for CelebA). This procedure is detailed on lines 291-292 of the paper.
(Q2) The free-lunch results in Section 4.2 only tune one hyperparameter (split ratio) and do not utilize any early stopping or regularization, except for a weight decay value of 1e-4 (not tuned). As discussed on Line 242 and Appendix D, we found that a 95/5 split worked the best in general, and we therefore recommend practitioners to tune no hyperparameters at all. Please see the (Concurrent work) section below for additional discussion about the regularization and hyperparameters used in Section 4.2.
(Q3) There are three SELF hyperparameters: (1) reweighting dataset size, (2) learning rate, and (3) early-stop threshold (early-stop SELF only) or dropout probability (dropout SELF only). The number of optimizer steps is kept constant as the reweighting dataset size changes. This procedure is detailed on lines 286-293 of the paper.
(Concurrent work) We thank the reviewer for bringing up the reference [3], which we had also noticed after the initial submission of our work. We believe their results corroborate our findings in Section 4.2. A key difference between our methods is that we only tune one hyperparameter (split ratio) in Section 4, while [3] includes two hyperparameters ($\gamma$ and $\lambda$ in their Section 3.1) which tune mistake upweighting and $\ell_2$ regularization towards the original last-layer weights. Our ablation in Appendix D shows that our Section 4 results are robust to split ratio and we recommend on Line 242 that practitioners **tune no hyperparameters at all**; therefore, our results are more practical because we do not require group annotations in the validation set. We remark that our Section 4.1 and Section 5 are entirely novel and no corresponding results appear in [3]. We have added a citation and discussion of [3] to Section 2.
[1] Koh et al. “WILDS: A Benchmark of in-the-Wild Distribution Shifts.” ICML 2021.
[2] Borkan et al. “Nuanced metrics for measuring unintended bias with real data for text classification.” WWW 2019.
[3] Qiu et al. “Simple and Fast Group Robustness by Automatic Feature Reweighting.” ICML 2023.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Dear authors, thank you for the rebuttal and clarifications. I am mostly satisfied with the response. For the experiment on the dependence on the number of labeled validation points, it would be great to see a comparison with DFR specifically, to make sure that the proposed method improves label efficiency compared to DFR. That being said, I believe this paper provides a nice contribution, and I vote for acceptance.
---
Reply to Comment 1.1.1:
Title: Requested experiments
Comment: Thank you for your feedback and continued engagement. The requested experiments finished during the discussion period, so we have included them below (except for CivilComments, which is still running). The results show that ES disagreement SELF has comparable or better group annotation efficiency than DFR, particularly at 1-2% of data. Compared to Figure 5 in Qiu et al, our SELF method displays similar scaling behavior to AFR.
***Waterbirds***
| Method/Group Annotations | 1% | 2% | 5% | 10% | 20% | 50% | 100% |
|--------------------------|---------------|---------------|--------------|---------------|--------------|--------------|--------------|
| DFR (our impl.) | 25.5 +/- 41.3 | 48.3 +/- 17.4 | 75.6 +/- 8.2 | 83.0 +/- 5.1 | 89.6 +/- 1.0 | 89.9 +/- 2.8 | 90.3 +/- 1.1 |
| ES Disagreement SELF | 92.4 +/- 0.4 | 88.3 +/- 6.8 | 92.0 +/- 0.9 | 85.9 +/- 11.4 | 92.4 +/- 0.4 | 90.6 +/- 2.6 | 92.4 +/- 0.4 |
***CelebA***
| Method/Group Annotations | 1% | 2% | 5% | 10% | 20% | 50% | 100% |
|--------------------------|---------------|---------------|---------------|--------------|--------------|--------------|--------------|
| DFR (our impl.) | 67.0 +/- 21.6 | 76.6 +/- 12.5 | 79.3 +/- 10.0 | 81.3 +/- 7.7 | 81.1 +/- 2.4 | 80.9 +/- 2.2 | 83.7 +/- 2.3 |
| ES Disagreement SELF | 76.6 +/- 9.4 | 76.2 +/- 11.4 | 82.3 +/- 5.2 | 77.1 +/- 5.6 | 81.6 +/- 4.2 | 79.9 +/- 4.4 | 81.6 +/- 4.2 |
***MultiNLI***
| Method/Group Annotations | 1% | 2% | 5% | 10% | 20% | 50% | 100% |
|--------------------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|
| DFR (our impl.) | 63.8 +/- 5.7 | 67.5 +/- 1.5 | 68.8 +/- 2.2 | 68.8 +/- 1.3 | 70.0 +/- 1.3 | 70.1 +/- 1.1 | 71.0 +/- 0.7 |
| ES Disagreement SELF | 68.1 +/- 1.4 | 66.1 +/- 4.7 | 68.1 +/- 1.4 | 68.1 +/- 1.4 | 67.4 +/- 2.4 | 67.4 +/- 2.4 | 68.1 +/- 1.4 | | Summary: This work tackles an important problem of preventing the reliance of neural networks on spurious correlations. It builds on top of the work [1] primarily by using last layer re-training on class balanced held out dataset without the need for group annotations. They also additionally propose a simple but effective method SELF wherein the samples in disagreement with regularized models in the prediction with class annotations can be used for re-training the last layer. This method is particularly useful when group imbalance is large.
Results are shown on Waterbirds, CelebA, MultiNLI and CivilComments datasets and performance is comparable to DFR [1].
Strengths: (1) The paper is well written and does very good analysis with ablation studies.
(2) Makes an important observation that it is acceptable for the re-weighting dataset to have a small proportion of worst-group data. This shows that class balancing is more important than group balancing.
(3) The paper addresses some of the practical challenges (in terms of computation and training overhead) by eliminating the need to annotate group labels (which is expensive), the need to train twice, and the need to train the original ERM on a class-balanced dataset.
(4) The provided solutions are simple and will not introduce any implementation or annotation overhead for already-trained large-scale models.
The paper addresses the limitation of accuracy gap when the group imbalance is high and proposes a solution for the same with SELF technique.
Weaknesses: (1) The observation of class-balancing importance and the re-evaluation of group balancing’s role in spurious correlations is a valuable contribution. However, the solution in itself lacks novelty, as it does not propose a new algorithm per se. It seems very similar to [1] "Last layer re-training is sufficient for robustness to spurious correlations", with fewer constraints on the re-training dataset.
(2) This particular line under Section 5 is unclear: “In addition to the balance of the reweighting, dataset, it is likely that characteristics of the specific data selected also contribute to SELF results.” Can you provide some qualitative examples from CelebA where dropout SELF disagreement does better than misclassification? More insights on what aspects of the data make dropout SELF better would be good. Similarly, more qualitative examples of the “uncertain data points” selected by SELF disagreement would provide valuable insights. A lot of the observations are not supported by reasoning.
(3) Since the key claim is the advantage of not needing group labels annotations - ablation studies on what is the overhead caused for obtaining group annotations (probably annotation time) will help quantify the effectiveness of the proposed method.
Similarly, ablation studies on the training compute time / memory consumption between [1] and your method can help quantify the gains, if any, from training ERM on a 95% split and using an MLP instead of logistic regression.
References:
[1] "Last layer re-training is sufficient for robustness to spurious correlations"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Covered in weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: A lot of the questions posed by the paper are still not addressed or analyzed. For example, it would be great to see more intuitive and theoretical reasoning on fundamental observations, such as why last-layer retraining on a small held-out class-balanced dataset improves group robustness. Though experimental evidence has been provided, further analysis of the theoretical reasoning behind this would make a significant contribution to this subject.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We graciously thank Reviewer KCny for their insightful comments and thoughtful suggestions. Below, we provide responses to each of the reviewer’s points.
(Weakness 1) We would like to clarify that we do not claim algorithmic novelty for the results in Section 4, only for our disagreement SELF algorithm in Section 5. We believe that our contributions in Section 4 instead fall in the category of foundational empirical insight; our experiments reveal previously unknown phenomena which are surprising in the context of previous literature and have important ramifications for future algorithm design. In particular, our Section 4 results call into question **why** DFR improves worst-group accuracy, as we show that perfect group balance is not needed to achieve good performance, and that last-layer retraining on the training distribution can be surprisingly effective. We hope these insights will motivate future theoretical work to understand the underlying foundations of spurious correlations, as well as motivate future empirical work that will not only push the boundaries of SOTA worst-group accuracy, but also provide simpler and more interpretable algorithms.
With that said, our disagreement SELF algorithm (Section 5) is entirely novel and differs from previous work [2, 3] in several important ways (besides not requiring group annotations). First, SELF finetunes the last layer instead of retraining as in [2]. Second, we show that tuning the $\ell_1$ regularization as in [2] is unnecessary and good performance can be achieved by fixed $\ell_2$ regularization, removing a hyperparameter. Third, we show that the best performance is achieved when using disagreements instead of misclassifications as in [3]. Fourth, while [3] assumes that the early-stopped model has low worst-group accuracy (causing misclassifications), SELF works even when the early-stopped model has higher worst-group accuracy than the convergent model (this is the case on CivilComments).
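To make the disagreement-based selection idea described above concrete, here is a minimal sketch: keep the held-out points on which a regularized (early-stopped or dropout-perturbed) model and the convergent model disagree, then finetune only the last layer on that subset. All names are illustrative assumptions, not the authors' implementation.

```python
def select_disagreements(convergent_preds, regularized_preds):
    """Indices of held-out points where the two models' predictions differ."""
    return [i for i, (p, q) in enumerate(zip(convergent_preds, regularized_preds))
            if p != q]

# Toy example: the models disagree on indices 1 and 3, so those held-out
# points would be selected for last-layer finetuning.
idx = select_disagreements([0, 1, 1, 0], [0, 0, 1, 1])
print(idx)  # → [1, 3]
```

Note that, unlike misclassification-based selection, this rule requires no labels on the held-out points at selection time, only the two models' predictions.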
(Weakness 2) We have included qualitative examples for misclassification SELF, dropout disagreement SELF, and early-stop disagreement SELF as Rebuttal Figure 2. With that said, we caution against drawing conclusions from a handful of qualitative examples. Therefore, we appeal to mathematical theory to justify why disagreement works well, and we recently proved a relevant theorem described in the (Limitations) section below. Likewise, we believe our observations on the distinction between misclassification and disagreement points are well-supported by previous study in the theory literature, including the references we cite on Lines 310-313.
(Weakness 3) While our methods remove the need to annotate thousands of examples, we believe our most significant contribution is in enabling new capabilities (e.g., in settings with strict privacy or fairness constraints, disallowing or reducing the very presence of annotations) rather than accelerating existing annotation pipelines. For example, racial and gender identity are among the most important and socially relevant spurious features, but these attributes are sensitive and often cannot be collected in the first place (making annotation time irrelevant). One could imagine a scenario where a limited number of users may voluntarily self-identify, but too few to run DRO or even DFR; in that case, annotation time is negligible and the labeled samples may be used for model selection in our SELF algorithm. The reviewer may also be interested to see the results of an ablation on the validation set size, which we included as Rebuttal Figure 1.
We did not perform a detailed ablation study on the time/memory consumption of SGD vs logistic regression since the compute required is negligible on modern hardware regardless of implementation (roughly 2-4MiB of RAM and 10-20 minutes of training for one SELF instance on one Nvidia V100 GPU). With that said, the previous logistic regression version must pre-compute and save to disk the feature embeddings for the entire dataset, which can take up several GB and can be slow depending on disk I/O speed (e.g., if using an HDD rather than an SSD).
(Limitations) We agree wholeheartedly with the reviewer that theoretical justification for the empirical phenomena identified in this paper is interesting and important. To this end, we have been considering multiple theoretical questions since the submission. (1) As the reviewer suggested, an intriguing question is why class-balanced last-layer retraining improves worst-group accuracy without group annotations. We have considered several possible avenues here, including whether last-layer retraining has a sparsity-inducing implicit regularization effect and whether it uses the held-out data to learn a non-degenerate classifier after the training data experiences neural collapse [1]. With that said, we think this question is interesting and difficult enough to be a separate paper of its own. (2) We are also interested in theoretical justification for the disagreement-based upsampling of minority groups phenomenon observed in Figure 3. On this front, we recently proved a result which shows that model disagreement provably upsamples minority group points. In particular, for an overparameterized linear regression setting with frozen features and four groups, we show that the KL divergence between the regularized and convergent models is always higher for minority group points than majority group points.
[1] Papyan et al. “Prevalence of Neural Collapse during the terminal phase of deep learning training.” PNAS 117 (40) 24652-24663.
[2] Kirichenko et al. “Last Layer Retraining is Sufficient for Robustness to Spurious Correlations.” ICLR 2023.
[3] Liu et al. “Just Train Twice: Improving Group Robustness without Training Group Information.” ICML 2021. | Rebuttal 1:
Rebuttal: Please see the attached PDF file for additional figures and tables.
Pdf: /pdf/d0dce44ffda8c2d1993882f3bf41e28661461cb5.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
SpatialRank: Urban Event Ranking with NDCG Optimization on Spatiotemporal Data | Accept (poster) | Summary: This paper casts event forecasting as a location ranking problem. The authors propose a spatial event ranking approach called SpatialRank, which optimizes the NDCG metric while taking spatiotemporal autocorrelation into account.
SpatialRank uses a graph convolutional network to encode autocorrelated input and applies NDCG optimization on the top fully-connected layer. Based on experiments on 3 datasets, it outperforms baseline approaches on the precision@k metric.
Strengths: This paper is a good application of graph convolutional networks to autocorrelated spatiotemporal data to solve the event ranking problem. None of the individual pieces are unique, but the compound technique is a novel application.
* Introduction of L-NDCG is novel as it addresses the problem of hotspots dominating the overall NDCG metric.
* It is also nice that the authors performed an ablation study -- albeit a rather minimal one.
* Ideas are fairly clear and explained well.
* The work might be significant if its applicability, which is teased in Section 6, is discussed in more detail.
Weaknesses: A major weakness of the paper is its presentation. It needs attention to detail as well as how it is presented overall before being ready to be published. Some of the presentation issues are as follows:
* Lines 41-61 are basically a Related Work section. However, the reader then encounters an actual Related Work section on the same page, where most of this information is repeated.
* Line 151. What's Adam-style optimizer? Do you mean to say gradient-based optimizers?
* The main contribution of the paper is illustrated in Figure 1 of the Appendix. It deserves to be in the actual paper.
* The Iowa dataset needs to be in the paper, and the existing tables should be condensed, as they do not carry enough information to deserve that much space.
* The paper occasionally leads the reader to assume that the actual task is to increase the NDCG score although the problem is to predict top k events.
* A few spelling, capitalization, or other errors:
* Line 86: Related Works -> Related Work
* Line 182: Relu -> ReLU
* Algorithm 1: Normorlize -> normalize
* Line 280: We -> we
* Line 281: Convlstm -> ConvLSTM
* K@30 -- I can't really tell what this means. Is it maybe K=30?
One other main weakness of the paper is its over-reliance on NDCG. First, the experiments shouldn't compare the approaches based on NDCG, since it is the metric this approach optimizes for, so the comparison wouldn't be fair per Goodhart's law. Second, it is stated that "NDCG is not a perfect metric for our problems as it neither measures the local ranking quality nor considers spatial autocorrelation among locations." However, NDCG is still treated as somewhat of a ground truth throughout the paper.
Once the problem is converted into NDCG optimization, other approaches can also be utilized. For the experiments, tree-based ranking models could also be evaluated as a replacement for the final FC layer, as they have the potential to perform better.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * What are the quantitative measures of performance wrt baselines?
* Do subregions need to be given?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have addressed a few limitations, including performance and data sparsity. To stress-test the approach, the authors are advised to test their method on a wider variety of datasets; one cannot confidently enumerate all limitations before the method is tested on many datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer VBY2,
Thank you very much for your comments! Below we answer your questions and address your comments.
Q: The reader is greeted with an actual Related Work section on the same page where most of the information is repeated.
A: Thank you for your suggestions! We will rephrase the related work section and reduce the repeated information in revision.
***
Q: What's Adam-style optimizer? Do you mean to say gradient-based optimizers?
A: Sorry for the confusion. We mean the Adam optimizer [1], an extension of stochastic gradient descent that computes individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradients. We found that the Adam optimizer performs better on our datasets, so we used this term in the paper. In principle, Equation 3 can be optimized by any gradient-based optimizer. To be accurate, we will use 'gradient-based optimizers' in the updated paper.
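For reference, the core Adam update described above can be sketched in a few lines (a minimal, framework-free illustration of our own; the hyperparameter defaults follow common practice and are not specific to the paper):

```python
def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: adaptive step from first/second moment estimates."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v
```

In a full model each parameter keeps its own `m` and `v` state, which is what gives Adam its per-parameter adaptive learning rates.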
***
Q: Figure 1 deserves to be in the actual paper.
A: Thank you for the suggestion. Figure 1 will be moved to Section 3 in the revision.
***
Q: Iowa dataset needs to be in the paper and the existing tables should be squeezed.
A: Thank you for the suggestion. Given this cross-domain spatiotemporal ranking problem, we feel the need to use multiple baselines, metrics, and analysis methods from both learning-to-rank and spatiotemporal event forecasting domains to guarantee the completeness and consistency of the experiments. We agree with you that including results from a different geographic area will make the conclusion of the paper stronger. We will rearrange the materials to fit the results of the Iowa data. **Please check the attached pdf in the overall rebuttal**
***
Q: The paper occasionally leads the reader to assume that the actual task is to increase the NDCG score although the problem is to predict top k events.
A: To clarify, the goal is to predict the top k events with the highest risk scores. We use a hybrid NDCG score as the objective function to evaluate ranking quality: optimizing (maximizing) the hybrid NDCG generally leads to a model that better ranks the top events. The traditional NDCG score is part of the objective function, but maximizing it alone might not give the best ranking results due to the spatial autocorrelation of the data. This motivates the proposed hybrid NDCG loss with the local NDCG term and the new algorithm.
***
Q: A few spelling, capitalization, or other errors.
A: Thank you for pointing them out! We will correct all issues in the revision.
***
Q: K@30? maybe K=30?
A: Yes, that is correct. For example, NDCG K@30 means only the top-30 ranked locations contribute to the final NDCG score. Details can be found in the Experiments section.
***
Q: One other main weakness of the paper is its over-reliance on NDCG. The experiments shouldn't compare the approaches based on NDCG, as it is the metric this approach is optimizing for, which wouldn't make a fair comparison per Goodhart's law.
A: First, NDCG is only one of the three metrics used in our experiments; we also compare top-k precision and local NDCG. NDCG and top-k precision are the most widely accepted metrics in learning-to-rank problems. Second, our method does not optimize any individual metric alone: it optimizes a hybrid NDCG, a combination of NDCG and L-NDCG. By contrast, some of the SOTA baselines (such as SONG and ApproxNDCG) directly optimize NDCG as their objective, so the comparison is, if anything, unfair to our model. Nevertheless, SpatialRank still achieves superior performance on all three metrics, including NDCG, compared to these methods. Lastly, the motivation of ranking locations by importance is the real-world demand of deploying limited law enforcement resources to the most needed places. Existing methods fail to prioritize the most important locations and thus perform poorly in terms of ranking quality.
***
Q: It is stated that "NDCG is not a perfect metric for our problems as it neither measures the local ranking quality nor considers spatial autocorrelation among locations." However, NDCG is still treated as somewhat of a ground truth throughout the paper?
A: We believe NDCG is a reasonable measure for this problem. Though we believe the proposed L-NDCG is more suitable, it is more convincing to also include existing, widely accepted ranking measures such as NDCG and top-k precision. We are not using NDCG as ground truth; we use it as a building block of our new hybrid loss function, which addresses the limitations of the traditional NDCG measure.
***
Q: Do subregions need to be given?
A: A subregion is defined as the spatial neighborhood of each location including itself. Users can define subregions based on their knowledge or common assumptions. In this paper we use a common choice of 3x3 grid cells around each location.
***
Q: What are the quantitative measures of performance wrt baselines?
A: We use four metrics to measure the performance of the baselines and the proposed method: NDCG, L-NDCG, top-k precision, and the cross-K function.
* NDCG measures ranking quality, i.e., the relevance of the foremost items; a higher score indicates better ranking quality.
* L-NDCG is proposed in our paper and designed to measure the spatially local ranking quality over every sub-region of the study area. Essentially, L-NDCG is the average of NDCG scores over all sub-regions.
* Top-k precision is a widely accepted measure in learning-to-rank problems. In our case, it equals the number of locations among the top-k recommendations where events occurred, divided by the number of recommendations k.
* The cross-K function measures the spatial correlation between the predicted locations and the true locations. Details can be found in Section 4.4.
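For concreteness, the first three measures can be sketched as follows (an illustrative NumPy implementation of our own, not the authors' code; the grid layout and the 3x3 neighborhood for L-NDCG follow the sub-region definition given earlier in this rebuttal):

```python
import numpy as np

def ndcg_at_k(true_scores, pred_scores, k):
    """NDCG@k: discounted gain of the top-k predicted locations vs. the ideal ranking."""
    order = np.argsort(pred_scores)[::-1][:k]
    ideal = np.sort(true_scores)[::-1][:k]
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = np.sum(true_scores[order] * discounts)
    idcg = np.sum(ideal * discounts)
    return dcg / idcg if idcg > 0 else 0.0

def precision_at_k(events, pred_scores, k):
    """Top-k precision: fraction of the top-k predicted locations where an event occurred."""
    order = np.argsort(pred_scores)[::-1][:k]
    return np.sum(events[order] > 0) / k

def local_ndcg(true_grid, pred_grid, k=3):
    """L-NDCG (as described): average NDCG over each location's 3x3 neighborhood."""
    H, W = true_grid.shape
    scores = []
    for i in range(H):
        for j in range(W):
            sl = (slice(max(i - 1, 0), min(i + 2, H)),
                  slice(max(j - 1, 0), min(j + 2, W)))
            t, p = true_grid[sl].ravel(), pred_grid[sl].ravel()
            scores.append(ndcg_at_k(t, p, min(k, t.size)))
    return float(np.mean(scores))
```

A perfect prediction yields NDCG of 1.0 both globally and in every neighborhood, while a model that ranks hotspots well globally but poorly within neighborhoods scores high on NDCG and low on L-NDCG, which is the gap the hybrid loss targets.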
***
[1] Kingma, et al.. Adam: A Method for Stochastic Optimization. 2014
---
Rebuttal Comment 1.1:
Title: Recognition of authors' rebuttal
Comment: Thank you for providing responses to my questions.
* Apologies for not being clear with my question about "What are the quantitative measures of performance wrt baselines?" I meant the runtime performance. Roughly, how long does it take to train and do inference compared to others?
* I agree with the rationale for using NDCG and coming up with L-NDCG, but not convinced by "Secondly, our method does not directly optimize any of the individual metrics. It indeed optimizes a hybrid NDCG, which is the combination of NDCG and L-NDCG." It still optimizes for NDCG and/or L-NDCG directly (modulo surrogate versions) -- making a linear combination of the two does not invalidate the fact that model optimizes the individual terms. Consider a loss function with cross entropy loss on labels and L2 weights as a regularization term. Final loss is $CE + W^2$. It doesn't change the fact that the model is optimizing on the labels.
* For my second point about NDCG, the paper starts with "The problem of urban event ranking aims at predicting the top-k most risky locations of future events such as traffic accidents and crimes." It is argued in the paper that NDCG is not a good measure for this problem ("NDCG is not a perfect metric for our problems ..."). Moreover, NDCG is one of the three metrics in the final evaluation. These points don't make a consistent narrative, as NDCG is not necessitated by the underlying problem but imposed by the authors to begin with.
* Sorry for insisting on this nitpicking comment: The table should have NDCG@K, Prec@K, and L-NDCG@K and column group headings should be K=30, K=40, K=50.
---
Reply to Comment 1.1.1:
Comment:
Dear reviewer VBY2
**Q**: Apologies for not being clear with my question about "What are the quantitative measures of performance wrt baselines?" I meant the runtime performance. Roughly, how long does it take to train and do inference compared to others?
**A**: Thank you for the suggestion! We have added comparisons with SOTA methods on average training time in seconds per epoch and inference time on the testing dataset. The results are copied below and will be added to the revision. The Chicago crime dataset and the Chicago accident dataset have the same input features and thus equivalent training costs. In summary, SpatialRank trains faster than two SOTA baselines, HintNet and GSNet, on both datasets. It is only slower than HeteroConvLSTM, and the training times of the two are on the same order of magnitude. The training phase of SpatialRank is slower because of the nested L-NDCG loss computation. Without the extra cost of the proposed optimization techniques, SpatialRank is significantly faster than all baselines in the inference phase. Given the improvement in prediction performance, we believe the training-time cost is acceptable and does not affect the prediction efficiency of the proposed method.
| Training Time (s/epoch) | SpatialRank | HintNet | HeteroConvLSTM | GSNet |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| Chicago | 88.2 | 132.1 | 47.7 | 98.8 |
| Iowa | 76.5 | 117.5 | 41.6 | 83.5 |
| Inference Time | SpatialRank | HintNet | HeteroConvLSTM | GSNet |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| Chicago | 5.6 | 51.2 | 14.4 | 21.1 |
| Iowa | 5.1 | 41.7 | 12.3 | 16.2 |
**Q**: I agree with the rationale for using NDCG and coming up with L-NDCG, but not convinced by "Secondly, our method does not directly optimize any of the individual metrics. It indeed optimizes a hybrid NDCG, which is the combination of NDCG and L-NDCG." It still optimizes for NDCG and/or L-NDCG directly (modulo surrogate versions) -- making a linear combination of the two does not invalidate the fact that model optimizes the individual terms. Consider a loss function with cross-entropy loss on labels and L2 weights as a regularization term. Final loss is $CE + W^2$. It doesn't change the fact that the model is optimizing on the labels.
**A**: Thank you for your feedback on our response; we are happy that you agree with the rationale for using NDCG and L-NDCG for this problem! We apologize for the confusing wording in our rebuttal. Yes, we agree with your comment that our proposed model directly optimizes a linear combination of NDCG and L-NDCG. By "not directly" we meant that we do not optimize either of them alone, but both at the same time. Some related work (e.g., SONG and ApproxNDCG) uses NDCG as the sole objective to optimize, yet our method still achieves a better NDCG than they do, which indicates it is more suitable than SOTA methods for solving our problem. The phrase "not directly" was ambiguous; fortunately, we do not use it to describe the proposed method in the paper. To be precise, we will state in the revision that our model optimizes a linear combination of NDCG and L-NDCG. Thanks again for your suggestions!
**Q**: For my second point about NDCG, the paper starts with "The problem of urban event ranking aims at predicting the top-k most risky locations of future events such as traffic accidents and crimes." It is argued in the paper that NDCG is not a good measure for this problem ("NDCG is not a perfect metric for our problems ..."). Moreover, NDCG is one of the three metrics in the final evaluation. These points don't make a consistent narrative, as NDCG is not necessitated by the underlying problem but imposed by the authors to begin with.
**A**: Sorry for the confusion; we will clarify this sentence in the revision. We actually meant that NDCG is not a perfect *objective function* for our problem (e.g., as used by those SOTA methods), but it is still a reasonable *metric* for our experiments. NDCG prioritizes the most important items in the ranking, which generally meets the need of our problem. To address the unique challenges of our problem, we propose a hybrid objective function that not only prioritizes the most significant locations but also considers local rankings between neighbors; this is the novelty over the existing NDCG measure. However, as this paper tries to bridge the gap between urban event ranking and existing learning-to-rank problems such as recommendation systems, we believe it is necessary to include this widely accepted ranking metric in our experiments.
**Q**: Sorry for insisting on this nitpicking comment: The table should have NDCG@K, Prec@K, and L-NDCG@K and column group headings should be K=30, K=40, K=50.
**A**: Thank you for your suggestions. We will update these notations in the revision.
Thanks again for your time reviewing our paper!! | Summary: The paper proposes a method that applies learning-to-rank losses to spatiotemporal event prediction. The observed data is considered over a discrete spatial partitioning, together with a time series of feature information. Features are grouped into purely temporal, purely spatial (not time-dependent), and spatiotemporal information. In addition, there is a risk score > 0 for (time, location) pairs where an event occurred. The examined task is to predict the top-k riskiest locations for the next time step.
The method encodes the state of the network using a known spatiotemporal prediction network that employs graph NN layers to encode the underlying spatial connectedness, modelling a road network.
The paper's novelty is to learn a dynamic adjacency matrix from the spatiotemporal features of each location, which allows the model to adapt to current conditions such as traffic. This dynamic adjacency matrix is combined with the static one.
A second novelty is adapting the ranking score to consider local rankings, which replace the global rank of location the original NDCG with a ranking score within a local neighbourhood. The localized NDCG is also combined with global NDCG loss.
For training, the authors propose to compute a weight for each element based on the ranking score of the prediction error.
Strengths: The paper proposes a novel idea to compute a ranking of locations instead of directly computing an event probability or even the next event, which might be hard to measure. In other words, instead of predicting the event likelihood, a ranking is learnt, which should be easier to learn but sufficient for various tasks.
Though the general layout of the proposed method is to combine NDCG with a spatial prediction backbone, the paper adds three modifications and shows that they considerably improve the performance compared to baselines.
The paper is good to follow and provides a reproducible description of the proposed methods.
The experiments show improved results on two real-world data sets, and the authors provide an ablation study.
The paper provided sufficient supplementary material to check all the details.
Weaknesses: Though I could follow the reasoning for computing a top-k query, why this information is enough in several applications could be motivated better in the introduction.
The ablation study seems only to compare various values of \delta. But it would also be interesting to see the impact of the other contributions. As the technical contribution seems to consist of three relatively independent adaptions to the spatiotemporal setting seeing how much each contribution added to the improvement might be very insightful.
The description of the dynamic adjacency matrix is a little cryptic. In particular, it is unclear how it is trained and why it should represent a dynamic adaptation of the static adjacency matrix.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: What is the intuition of subnetwork for the dynamic adjacency matrix, and what is it supposed to model? Can you give more details about the intuition of the architecture and how it naturally aligns with the static adjacency matrix?
(There should not be a link if there is no connection in the static matrix, right?).
Can you give details on how much the instance weighting in training helped to improve the performance or speed up the convergence?
Similarly, is there an ablation study on the dynamic adjacency matrix?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The paper addresses its limitations in a dedicated paragraph.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer JmsG,
Thank you very much for your comments and appreciation of our paper! Below we address your questions and concerns.
Q: Though I could follow the reasoning of computing a top-k query, the motivation why this information is enough in several applications could be motivated better in the introduction.
A: Thanks for your suggestions! We will add extra explanation and citations on our motivation for formulating a ranking problem in the introduction. We believe that correctly ranking the most important locations meets the real-world demand: given limited law enforcement resources such as staffing, the most essential task is to ensure they are allocated to the riskiest areas. The value of k can be adjusted by users and set larger to report more locations. Our case study in Figure 4 demonstrates that our proposed method captures more hotspots than baselines that make predictions for all locations.
**Please also see more details in response #2 in overall rebuttal.**
***
Q: The ablation study seems only to compare various values of \delta. But it would also be interesting to see the impact of the other contributions.
A: Sorry for the confusion. Due to the page limit and the volume of experimental results, instead of separate tables we folded some of the ablation studies into the performance comparison and optimization comparison parts. To study how the adaptive convolution layer improves performance, we set $\beta = 0.5$ in SpatialRank (marked SpatialRank* in the performance comparison in Table 1), so that the ratio $\beta$ between the static and dynamic adjacency matrices is fixed. To study how our proposed optimization solution contributes to the performance, we compare our approach with other SOTA solutions on the same network architecture in the optimization comparison in Table 2. All the results show that each of the three contributions of the paper plays an important role in the superior performance of SpatialRank.
***
Q: It is unclear how it is trained and why it should represent a dynamic adaption of the static adjacency matrix. What is the intuition of subnetwork for the dynamic adjacency matrix, and what is it supposed to model? Can you give more details about the intuition of the architecture and how it naturally aligns with the static adjacency matrix? (There should not be a link if there is no connection in the static matrix, right?).
A: The dynamic adjacency matrix is learned by computing pairwise similarity between source node embeddings and target node embeddings. If node information is not available, $E_1$ and $E_2$ are randomly initialized node embeddings learned during training; in our case, we treat each node's $F_{ST}$ as its embedding. The parameters $W_1$ and $W_2$ are learnable. Intuitively, the static adjacency matrix encodes a baseline correlation between locations: for example, event patterns in a residential area might be constantly correlated with patterns in a nearby shopping center. The static adjacency matrix is pre-computed before training, which is also how most related work proceeds. However, observations from related studies [1] suggest that this correlation is not always constant. Therefore, we use locations' time-variant features to construct a new adjacency matrix that varies with time, whose parameters are learned during network training. Finally, we combine the static and learned dynamic matrices through a learned weight $\beta$. In this way, the combined adjacency matrix can be viewed as an adaptation of the static one that accounts for the influence of other features during different periods. There should not be a link between two nodes in the static matrix if the Pearson correlation coefficient between them is zero; the link between locations in the final adjacency matrix is determined by both the static and the dynamic graphs.
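As a rough illustration of this construction (our own NumPy sketch based on the description above, not the authors' exact Eqs. (4)-(6); the tanh nonlinearity, ReLU thresholding, and row normalization are assumptions):

```python
import numpy as np

def dynamic_adjacency(F_st, W1, W2):
    """Build a dynamic adjacency matrix from pairwise similarity of
    learned source/target node embeddings (hypothetical formulation)."""
    Z1 = np.tanh(F_st @ W1)          # source node embeddings
    Z2 = np.tanh(F_st @ W2)          # target node embeddings
    S = Z1 @ Z2.T                    # pairwise similarity between nodes
    S = np.maximum(S, 0.0)           # keep only positive correlations (ReLU)
    return S / (S.sum(axis=1, keepdims=True) + 1e-8)  # row-normalize

def combined_adjacency(A_static, A_dynamic, beta):
    """Adaptive layer: learned mix of the pre-computed static graph
    and the time-varying dynamic graph."""
    return beta * A_static + (1.0 - beta) * A_dynamic
```

Here `F_st` plays the role of the time-variant features used as node embeddings, and `beta` is the learned mixing weight between the two graphs.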
***
Q: Similarly, is there an ablation study on the dynamic adjacency matrix?
A: As answered in the second question, Table 1 contains ablation studies on the learned combined adjacency matrix versus the fixed adjacency matrix. The results show that introducing the dynamic adjacency matrix effectively improves the performance of the model (last two rows of Table 1). There could be other ways to construct the dynamic adjacency matrix, but we did not perform additional evaluations of these choices, as this is not the main focus of the paper.
***
Q: Can you give details on how much the instance weighting in training helped to improve the performance or speed up the conference?
A: Thank you for pointing this out. We will add an extra ablation study to the camera-ready paper; the results are copied below. We compare SpatialRank to the same model with equal weights. The results show that instance weighting consistently improves the prediction performance.
| Dataset | Method | NDCG@30 | PREC@30 | L-NDCG@30 | NDCG@40 | PREC@40 | L-NDCG@40 | NDCG@50 | PREC@50 | L-NDCG@50 |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| Chicago Accident | equal weights | .255 | .441 | .622 | .265 | .417 | .613 | .274 | .401 | .595 |
| Chicago Accident | SpatialRank | .257 | .444 | .621 | .268 | .420 | .614 | .278 | .403 | .599 |
| Chicago Crime | equal weights | .373 | .480 | .660 | .379 | .466 | .456 | .390 | .450 | .649 |
| Chicago Crime | SpatialRank | .373 | .491 | .665 | .380 | .467 | .647 | .392 | .446 | .644 |
| Iowa | equal weights | .531 | .304 | .617 | .557 | .264 | .591 | .573 | .225 | .546 |
| Iowa | SpatialRank | .540 | .309 | .618 | .563 | .268 | .600 | .585 | .232 | .550 |
***
[1] Carey et al “Impact of Daylight Saving Time on Road Traffic Collision Risk: a Systematic Review.”
---
Rebuttal Comment 1.1:
Title: recognition of author rebuttal
Comment: Thank you for carefully addressing the brought-up points about your submission. I also appreciate the extra effort in providing the additional experimental results.
---
Reply to Comment 1.1.1:
Comment: Reviewer JmsG,
we appreciate your feedback and suggestion to our paper. We are glad the paper was improved.
Thank you very much, Reviewer JmsG! | Summary: The paper proposes a deep learning model called SpatialRank that predicts the top-k riskiest locations of future events such as traffic accidents and crimes by optimizing a spatial version of the NDCG measure. The model features adaptive graph convolution layers that learn the spatiotemporal dependencies from data, a hybrid loss function that balances global and local ranking quality, and an importance-sampling with spatial filtering algorithm that guides the model to focus on important locations. The model is evaluated on three real-world datasets from Chicago and Iowa, and outperforms several methods in terms of NDCG, L-NDCG, and precision. The model also demonstrates better spatial correlation with ground truth using the cross-K function.
Strengths: S1 The proposed method is evaluated on three different real-life datasets.
S2 The idea of introducing the NDCG metric into a location-based learning problem is somewhat interesting.
Weaknesses: Though this paper arises an interesting problem, I have the following major concerns about this paper:
(1) The motivation for urban event ranking is weak. The paper does not highlight the significance and novelty of this problem, and why it is more important than making event predictions for each location. The paper should provide more evidence or examples to motivate the need and value of ranking locations for future events.
(2) The technical contribution is limited and the model is not novel enough. The paper does not clearly state how the proposed model differs from existing methods in terms of problem formulation, model design, optimization strategy, or evaluation metric. The designed NDCG loss function is quite similar to Eq. (3). I do not think there is much difference between these two functions. The paper should provide more details and analysis to demonstrate the advantages and challenges of the proposed approach.
(3) The model uses a Euclidean distance to define the neighborhood of each location, which may not reflect the actual spatial proximity or connectivity of locations in terms of road network or travel distance.
(4) The presentation and description of the framework is poor and incomplete. There are several unclear or confusing points in the paper, please refer to the above Questions.
(5) The experiments are not sufficient. There are several limitations or missing details in the experimental section, such as:
(5.1) No significance test to show whether the differences between SpatialRank and baselines are statistically significant or not.
(5.2) No ablation study experiments to show the effectiveness or necessity of each component of SpatialRank, such as the dynamic graph generation module or the designed NDCG loss function.
- Minor concerns:
(6) There are some grammar issues in the paper, such as “learning-to-ranking” in Line 51, which should be corrected.
(7) The paper lacks a framework figure to illustrate the overall architecture and workflow of the model, which makes it hard to follow and understand.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Q1. Directly constructing from temporal events is more effective and explicit. Why generating a dynamic graph from features is necessary or effective? How does it capture the dynamic spatiotemporal dependencies among locations? How does it compare with directly constructing a graph from temporal events?
Q2. Why does the method set $E_1 = E_2 = F_{ST}$ in Eq. (4) and Eq. (5)? Since these two equations are almost the same, why combine the two embeddings $Z_1$ and $Z_2$ in Eq. (6) for generating the dynamic adjacency matrix?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The authors have discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer yV4P
Thank you very much for your comments! Please find our responses below:
Q: The motivation for urban event ranking is weak. The paper should provide more evidence or examples to motivate the need and value of ranking locations for future events.
A: We believe that correctly ranking the foremost important locations meets a real-world demand, because many related works and reports have shown that deploying limited staffing resources to the most needed places is a crucial task for law enforcement. Case studies also show that our method can find the riskiest locations and capture more hotspots than baselines that predict for all locations. **We provide more details in response #2 in the overall rebuttal.**
* * *
Q: The technical contribution is limited and the model is not novel enough. The paper should provide more details and analysis to demonstrate the advantages and challenges of the proposed approach.
A: We believe this paper presents important, novel, and valid contributions.
First, this is the first paper to formulate an urban event forecasting problem as a spatial learning-to-rank problem and solve it by directly optimizing a spatial version of the NDCG measure.
Second, we propose a novel local ranking measure named L-NDCG and integrate it into our new loss function. This is, to the best of our knowledge, the first NDCG-based measure that considers the spatial autocorrelation of data. The new loss **differs from the original NDCG** not only in an additional local ranking quality term, but also in its underlying scientific assumptions and the non-trivial computational techniques needed to evaluate it efficiently.
Third, we propose a novel importance-based location sampling algorithm to efficiently train the model to optimize the hybrid NDCG loss function. **The algorithm is also very different from traditional NDCG optimization techniques** as it uses spatial sampling to address the L-NDCG part of the loss function for the first time.
In addition, we propose a network architecture with a novel adaptive convolution layer to capture the dynamic correlations between locations.
Finally, we provide comprehensive experiments and evaluate the improvement gained through each of the above contributions.
**More details in response #3 of overall rebuttal.**
* * *
Q: The model uses a Euclidean distance to define the neighborhood of each location, which may not reflect the actual spatial proximity or connectivity of locations in terms of road network or travel distance.
A: We choose Euclidean distance because:
* Euclidean distance is a widely accepted distance measure in the field of spatiotemporal forecasting problems, because it is computationally efficient and generally effective [1].
* Computing actual spatial proximity in terms of the road network is computationally expensive and requires additional data, which might not be available to users.
* The difference between using Euclidean distance and using travel distance is negligible in the small local neighborhoods we consider. We computed the travel distance between the centroids of grid cells using road information and found that the Euclidean distance is proportionally equivalent to the travel distance at the current partition granularity in our Chicago and Iowa datasets. Even with Euclidean distance, our model is already superior to SOTA methods.
* * *
Q: There are some grammar issues in the paper.
A: Thank you for pointing this out. We will correct grammar issues.
* * *
Q: The paper lacks a framework figure to illustrate the overall architecture and workflow of the model, which makes it hard to follow and understand.
A: Due to the limited paper space, the framework figure and extra experiment results were in supplementary material. We will improve the presentation in the revision.
* * *
Q: Why does the method set E1 = E2 = FST in Eq. (4) and Eq. (5)? Why combine the two embeddings Z_1 and Z_2 in Eq. (6) to generate the dynamic adjacency matrix?
A: Sorry for the confusion. If nodes’ information is not available, $E_1$ and $E_2$ are randomly initialized node embeddings to be learned during training. In our case, we take advantage of two sets of $F_{ST}$ from the source node and target node respectively to represent node embeddings $E_1$ and $E_2$, so we are calculating the pairwise similarity between source node features and target node features in Eq. 6. The subtraction and ReLU activation function in Eq. (6) lead to the asymmetric property. In other words, we treat nodes' spatiotemporal features as embeddings to reveal underlying dynamic connections between nodes. We will rephrase these equations in the revision.
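For concreteness, a minimal sketch of this kind of construction (illustrative only, not the exact layer in the paper; the toy embeddings and the `dynamic_adjacency` name are our own):

```python
# Illustrative sketch: build a dynamic, asymmetric adjacency matrix from two
# sets of node embeddings, following the description of Eq. (6): pairwise
# similarity between source and target embeddings, a subtraction, then ReLU.
# The embedding values below are hypothetical.

def dynamic_adjacency(E1, E2):
    """A[i][j] = ReLU(<E1[i], E2[j]> - <E2[i], E1[j]>), asymmetric by design."""
    n = len(E1)
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return [[max(0.0, dot(E1[i], E2[j]) - dot(E2[i], E1[j])) for j in range(n)]
            for i in range(n)]

E1 = [[1.0, 0.0], [0.5, 0.5]]   # source-node embeddings (toy)
E2 = [[0.2, 0.8], [0.9, 0.1]]   # target-node embeddings (toy)
A = dynamic_adjacency(E1, E2)
# Because of the subtraction, A[i][j] and A[j][i] generally differ, and the
# ReLU zeroes out at most one direction of each pair.
```

The subtraction before the ReLU is what makes the learned graph directed, matching the asymmetric-property remark above.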
* * *
Q: Directly constructing a graph from temporal events seems more effective and explicit. Why is generating a dynamic graph from features necessary or effective? How does it capture the dynamic spatiotemporal dependencies among locations? How does it compare with directly constructing a graph from temporal events?
A: Our proposed graph learning method learns graphs based on nodes’ features and dynamically updates them based on temporal information. As mentioned in Section 3, we use historical events to generate a static graph and learn a time-variant graph from $F_{ST}$ (e.g., traffic volume) across different periods. This design can capture the dynamic dependencies among locations in real-world datasets. The experimental results in Table 1 further prove the benefits of a dynamic graph versus a static one. We are not sure what you mean by constructing a graph from temporal events. The dataset covers a long time period (e.g., a few years); with the events from each location during each day as nodes (on the order of millions), the graph, and in particular its adjacency matrix, could be too large for any training algorithm.
* * *
[1] Zhe Jiang. 2018. A survey on spatial prediction methods. TKDE 31, 9 (2018), 1645–1664
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer yV4P,
We sincerely appreciate your time reviewing our paper! We hope we have addressed all your concerns in our response. Please let us know if you have any additional questions.
---
Rebuttal Comment 1.2:
Title: Thanks for your careful response
Comment: Dear authors;
Thanks for your careful response. I have raised my score. However, I still would like to give several suggestions for this paper:
1) An overall framework is necessary to show in the main manuscript to help the readers quickly get the general idea of the paper;
2) The presentation of the paper needs to be significantly improved before publication;
3) The improvement of the proposed method compared with baselines is incremental. Therefore, a significance test is needed to prove the improvement is not trivial.
Regards
---
Reply to Comment 1.2.1:
Comment: Thank you reviewer yV4P!
We will make the following improvements in the revision regarding your suggestions:
**Comment 1** An overall framework is necessary to show in the main manuscript to help the readers quickly get the general idea of the paper.
**A:** Thanks for the suggestion! We will move the overall framework from supplementary material to the main manuscript and summarize the general idea of our paper at the beginning of the methodology section.
**Comment 2** The presentation of the paper needs to be significantly improved before publication;
**A:** Thanks for the suggestion! We will improve the presentation of our paper in the revision.
**Comment 3** The improvement of the proposed method compared with baselines is incremental. Therefore, a significance test is needed to prove the improvement is not trivial.
**A:** Thanks for the suggestion! We provide the average performance with standard deviation over three runs in Table 1 and Table 2 of the updated pdf file in the authors’ rebuttal. Furthermore, we have conducted Student’s t-test on these results, which demonstrates that our improvements over the baselines are non-trivial. The results are copied below and will be included in the revision. In the three tables, each * indicates that the performance improvement of the proposed method over that baseline is statistically significant according to Student’s t-test with $\alpha = 0.05$ over three runs.
|Chicago Crime|NDCG@30|PREC@30|L-NDCG@30|NDCG@40|PREC@40|L-NDCG@40|NDCG@50|PREC@50|L-NDCG@50|
|-----------| -----------|----------- | -----------|-----------|-----------| -----------|----------- |----------- | ----------- |
| LSTM | .246+-0.001*| .327+-0.002*| .517+-0.003* | .257+-0.001* | .329+-0.002*| .527+-0.003* | .262+-0.003* |.314+-0.005* | .512+-0.003*|
| ConvLSTM | .313+-0.002*| .415+-0.004*| .617+-0.004* | .325+-0.002* | .404+-0.001*| .607+-0.006* | .333+-0.002* |.387+-0.004* | .599+-0.003*|
| GSNet | .283+-0.003*| .388+-0.002*| .584+-0.003* |.296+-0.002* | .374+-0.002*| .565+-0.003* | .299+-0.002* |.335+-0.004* | .568+-0.003*|
| HeteroConvLSTM | .346+-0.003*| .468+-0.003*| .657+-0.004|.365+-0.001* | .452+-0.004*| .642+-0.006| .374+-0.004* |.433+-0.004* | .386+-0.004*|
| HintNet| .342+-0.003*| .468+-0.003*| .661+-0.004 | .358+-0.003* | .448+-0.004*| .631+-0.006*| .369+-0.002* |.434+-0.002* | .628+-0.004*|
|Chicago Accident|NDCG@30|PREC@30|L-NDCG@30|NDCG@40|PREC@40|L-NDCG@40|NDCG@50|PREC@50|L-NDCG@50|
|-----------| -----------|----------- | -----------|-----------|-----------| -----------|----------- |----------- | ----------- |
| LSTM | .215+-0.002*| .392+-0.002*| .519+-0.003* | .225+-0.002* | .380+-0.002*| .543+-0.003* | .249+-0.001* |.368+-0.002* | .544+-0.003*|
| ConvLSTM | .225+-0.005*| .410+-0.004*| .553+-0.003* | .236+-0.001* | .388+-0.004*| .563+-0.008* | .252+-0.001* |.366+-0.002* | .540+-0.008*|
| GSNet | .194+-0.001*| .371+-0.002*| .493+-0.005* |.201+-0.002* | .371+-0.002*| .517+-0.005* | .231+-0.001* |.337+-0.004* | .499+-0.003*|
| HeteroConvLSTM | .229+-0.002*| .401+-0.001*| .557+-0.002* |.240+-0.001* | .390+-0.004*| .564+-0.003* | .225+-0.003* |.375+-0.002* | .551+-0.003*|
| HintNet| .228+-0.001*| .400+-0.002*| .555+-0.003*| .238+-0.002* | .390+-0.003*| .569+-0.003*| .256+-0.001* |.373+-0.004* | .561+-0.008*|
|Iowa|NDCG@30|PREC@30|L-NDCG@30|NDCG@40|PREC@40|L-NDCG@40|NDCG@50|PREC@50|L-NDCG@50|
|-----------| -----------|----------- | -----------|-----------|-----------| -----------|----------- |----------- | ----------- |
| LSTM | .503+-0.003*| .278+-0.003*| .573+-0.003* | .522+-0.001* | .209+-0.001*| .518+-0.003* | .519+-0.005* |.197+-0.001* | .474+-0.001*|
| ConvLSTM | .490+-0.003*| .282+-0.002*| .583+-0.001* | .507+-0.003* | .207+-0.001*| .513+-0.004* | .511+-0.003* |.189+-0.003* | .474+-0.008*|
| GSNet | .493+-0.002*| .265+-0.001*| .569+-0.003* | .509+-0.003* | .222+-0.003*| .527+-0.003* | .506+-0.005* |.207+-0.001* | .510+-0.003*|
| HeteroConvLSTM | .518+-0.001*| .289+-0.002*| .617+-0.004 | .523+-0.005* | .258+-0.001*| .589+-0.005* | .543+-0.003* |.226+-0.001* | .534+-0.005*|
| HintNet| .512+-0.005*| .289+-0.003*| .617+-0.001 | .542+-0.005* | .243+-0.004*| .590+-0.009 | .556+-0.003* |.209+-0.002* | .534+-0.008*| | Summary: This paper investigates the problem of future urban event prediction on spatiotemporal data. This is an important problem for a broad range of urban applications. Different from prior work, this paper for the first time predicts the most likely future events by directly optimizing location ranking in the prediction through NDCG optimization. The authors propose a dynamic adjacency matrix of locations, a hybrid NDCG loss function and a spatial sampling algorithm to handle the unique challenges brought by spatiotemporal data. Experimental results on three datasets show that the proposed solution can beat state-of-the-art event prediction models as well as existing NDCG optimization solutions on crime and accident prediction.
Strengths: 1. The problem investigated by the paper is significant for many urban applications such as crime prediction and accident forecasting.
2. The paper is the first to use NDCG optimization on spatiotemporal data for location ranking. This is a novel idea for event prediction.
3. The work also adds value to the literature of ranking algorithm for providing solutions on how to handle non-iid spatial data ranking.
4. The experiments show that the proposed SpatialRank model outperforms not only event/accident prediction methods but also traditional NDCG optimization method. This suggests that the model is effective in addressing some of the unique challenges in spatiotemporal data.
Weaknesses: 1. As discussed by the authors, the hybrid loss function might introduce significant computation cost increase to the learning algorithm. In addition to the complexity analysis, the authors should provide additional evidence (e.g., experiments) to justify the impact to training time.
2. There are quite a lot of symbols in the paper and the authors should provide a table or summary of these symbols. Without such information, the complexity analysis part is a bit hard to follow.
3. The experimental results should be presented with error bars.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the model architecture used for the optimization comparison experiment? How does the model architecture choice affect the performance of the optimizers? Is it possible that with a different network architecture the other optimizers such as SONG or approxNDCG can beat spatialRank?
2. How is top-k precision defined in the experiments?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer RZQE,
Thank you very much for your comments and appreciation of our paper! Below we address your questions and concerns.
Q: In addition to the complexity analysis, the authors should provide additional evidence (e.g., experiments) to justify the impact to training time.
A: Thank you for the suggestion! We have added comparisons with SOTA methods on average training time in seconds per epoch. The results are copied below and will be added to the revision. The Chicago crime dataset and the Chicago accident dataset have the same input features and thus have equivalent training costs. In summary, SpatialRank trains faster than the two SOTA baselines HintNet and GSNet on both datasets. It is only slower than HeteroConvLSTM, but the training times of the two are on the same order of magnitude. Given the improvement in prediction performance, we believe this is an acceptable cost that will not affect the practical value of the proposed method.
| Dataset | SpatialRank | HintNet | HeteroConvLSTM | GSNet |
| ----------- | ----------- |----------- | ----------- | ----------- |
| Chicago | 88.2 | 132.1 | 47.7 | 98.8 |
| Iowa | 76.5 | 117.5 | 41.6 | 83.5 |
* * *
Q: There are quite a lot of symbols in the paper and the authors should provide a table or summary of these symbols.
A: Thank you for the suggestion. We will add a symbol table to the revision. Besides, explanations on symbols related to ranking optimization are included in Section 3.2 and Section 4.2. Symbols for problem formulation are introduced in Section 3.1.
| Symbol | Explanations |
| ----------- | ----------- |
| $S$ | Spatial field, study area |
| $s$ | A partitioned location, grid cell|
| $T$ | Temporal field, study period|
| $t$ | Time interval (e.g. hours, days)|
| $F_T$ | Temporal features (e.g. weather, time)|
| $F_S$ | Spatial features (e.g. POI) |
| $F_{ST}$ | Spatiotemporal features (e.g. traffic conditions) |
| $y$ | Event Risk Score|
| $Z$ | Discounted Cumulative Gain (DCG) score|
| $E$ | Node embeddings|
| $a$ | Pearson correlation coefficient|
| $NDCG$ | Normalized Discounted Cumulative Gain|
| $L-NDCG$ | Local Normalized Discounted Cumulative Gain|
| $Prec$ | top-K precision|
| $r()$ | Ranking function|
| $N()$ | Neighbour querying|
* * *
Q: The experimental results should be presented with error bars
A: Thank you for pointing it out. We have revised the three sets of experimental results and now report the average performance, including NDCG, L-NDCG, and Precision, with standard deviation over 3 runs in Table 1 and Table 2 of the updated pdf. The results show that SpatialRank significantly outperforms the compared baselines on all three datasets. SpatialRank substantially outperforms SONG and ApproxNDCG and makes a noticeable improvement on L-NDCG, as L-NDCG is part of its objective function.
* * *
Q: What is the model architecture used for the optimization comparison experiment? How does the model architecture choice affect the performance of the optimizers? Is it possible that with a different network architecture the other optimizers such as SONG or approxNDCG can beat spatialRank?
A: The model architecture used for the optimization comparison is described in Section 4.1; it utilizes LSTM to capture temporal dependencies and graph convolution layers to capture spatial dependencies. This is consistent with the model proposed in Section 3.1. In the ablation study, we demonstrate that with the same network architecture but different loss functions and learning algorithms, SpatialRank can still beat the other baselines (e.g., CE, SONG). This shows that the performance improvements come not solely from the network architecture proposed in Section 3.1 but from all three contributions. In fact, our proposed framework can improve the performance of other network architectures as well, which is our core contribution.
* * *
Q: How is top-k precision defined in the experiments?
A: Top-k precision is a widely accepted measure in learning-to-rank problems. In our case, it equals the number of locations among the top-k recommendations where events occurred, divided by the number of recommendations k. It indicates how well our model captures events within its recommendations. More explanation can be found in Lu et al. [1]. We will also cite this paper in the experiment section of the revision.
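A minimal sketch of this definition (all scores and labels are toy values; `precision_at_k` is our own illustrative name):

```python
# Top-k precision as described above: among the k highest-scored locations,
# the fraction where an event actually occurred.

def precision_at_k(scores, events, k):
    # Indices of the k locations with the highest predicted risk.
    topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return sum(events[i] for i in topk) / k

scores = [0.9, 0.1, 0.8, 0.4]   # predicted risk per location (toy)
events = [1, 0, 0, 1]           # 1 if an event occurred at that location
print(precision_at_k(scores, events, 2))  # top-2 = locations 0 and 2 -> 0.5
```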
* * *
[1] Lu, Jing, et al. Sampling Wisely: Deep Image Embedding by Top-K Precision Optimization. IEEE, 2019
---
Rebuttal Comment 1.1:
Title: Comment
Comment: All of my concerns raised in the review have been addressed. The paper is well written with solid technical contributions and promising experiment result. I would like to increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you reviewer RZQE! | Rebuttal 1:
Rebuttal: Q: The experimental results should be presented with error bars (RZQE)
A: Thank you for pointing it out. We have revised the three sets of experimental results and now report the average performance, including NDCG, L-NDCG, and Precision, with standard deviation over 3 runs in Table 1 and Table 2 of the updated pdf. The results show that SpatialRank significantly outperforms the compared baselines on all three datasets. SpatialRank substantially outperforms SONG and ApproxNDCG and makes a noticeable improvement on L-NDCG, as L-NDCG is part of its objective function.
* * *
Q: Though I could follow the reasoning of computing a top-k query, the motivation why this information is enough in several applications could be motivated better in the introduction. (JmsG)
Q: The motivation for urban event ranking is weak. The paper does not highlight the significance and novelty of this problem, and why it is more important than making event predictions for each location. (yV4P)
A: Thanks for your suggestions! We will add extra explanations and citations on our motivation for formulating a ranking problem in the introduction of the revision. We believe that correctly ranking the foremost important locations meets a real-world demand, because deploying limited law enforcement resources to the most needed places is necessary. The Chicago Police Department has utilized criminal intelligence analysis and data science techniques to help command staff determine where best to deploy resources [1]. According to the Police Executive Research Forum, there were 42.7% more resignations among law enforcement but a 3.9% decrease in hiring of new officers in 2021 compared to 2019 [2]. Meanwhile, the Federal Bureau of Investigation confirms that violent crime in 2020 surged nearly 30% over 2019 [3]. Given such growth in crime, deploying limited law enforcement resources to the most needed places is all the more necessary. In addition, our case study in Figure 4 of the supplementary material demonstrates that our proposed method can prioritize the riskiest locations and capture more hotspots compared to baselines that make predictions for all locations. This can potentially improve the deployment efficiency of police resources.
* * *
Q: The technical contribution is limited and the model is not novel enough. The paper should provide more details and analysis to demonstrate the advantages and challenges of the proposed approach. (yV4P)
A: In this paper, we present the following contributions in terms of problem formulation, model design, optimization strategy, and evaluation metrics.
* This is the first paper to formulate an urban event forecasting problem as a spatial learning-to-rank problem and solve it by directly optimizing a spatial version of the NDCG measure. (Existing works, in contrast, solve this problem by optimizing non-ranking-based metrics such as cross entropy.)
* We propose a novel local ranking measure named L-NDCG and integrate it into our new loss function. This is, to the best of our knowledge, the first NDCG-based measure that considers the spatial autocorrelation of data. The new hybrid loss **differs substantially from the original NDCG** not only in an additional local ranking term, but also in its underlying scientific assumptions and the non-trivial computational techniques needed to evaluate it efficiently. In Section 3.2, we explain the first law of geography [7], that nearby locations tend to be similar, which makes it challenging to rank neighboring locations correctly. L-NDCG emphasizes ranking correctly on each subset of locations so that important locations can be distinguished from their neighbors. This is an important advancement over existing work.
* We propose a novel importance-based location sampling algorithm to efficiently train the model to optimize the hybrid NDCG loss function, where we guide the model to pay more attention to important locations with higher training errors. We explain more details in Algorithm 1 in the paper. **The algorithm is also very different from traditional NDCG optimization techniques** as it uses spatial sampling to address the L-NDCG part of the loss function for the first time.
* Our proposed optimization framework can work with different deep learning architectures. We propose a variant of existing graph neural networks with a novel adaptive convolution layer to capture the dynamic correlations between locations. Unlike prior works such as [4], our design allows us to capture the correlations between locations dynamically, which effectively improves the performance of the method.
* We provide comprehensive experiments on our proposed approach. We evaluate the performance of the adaptive convolution layers (SpatialRank* vs. SpatialRank in Table 1), the advantages of L-NDCG in the hybrid objective function (ablation study on $\sigma$ in Table 3), and the effectiveness of our spatial sampling algorithm (compared with SOTA optimization methods in Table 2). The overall results clearly show that each of the above three contributions plays an important role in helping SpatialRank outperform all the SOTA baselines on three different datasets. Therefore, we believe our paper makes important, novel, and valid contributions.
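For reference, the standard NDCG@k measure that the hybrid loss builds on can be sketched as follows (the L-NDCG local term and the differentiable surrogate used for training are not reproduced here; all scores and relevances are toy values):

```python
import math

# Standard NDCG@k: the discounted cumulative gain of the predicted ranking,
# normalized by the DCG of the ideal (ground-truth) ranking.

def dcg(rels):
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg_at_k(scores, rels, k):
    # Relevances ordered by predicted score, top-k only.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    ideal = sorted(rels, reverse=True)[:k]
    idcg = dcg(ideal)
    return dcg([rels[i] for i in order]) / idcg if idcg > 0 else 0.0

scores = [0.2, 0.9, 0.5]   # predicted risk per location (toy)
rels   = [3.0, 1.0, 2.0]   # ground-truth event risk scores (toy)
print(ndcg_at_k(scores, rels, 3))  # < 1 because the ranking is imperfect
```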
* * *
[1] [CPD Expands Smart Policing Technology to Support Strategic Deployment and CTA Safety]( https://home.chicagopolice.org/cpd-expands-smart-policing-technology-to-support-strategic-deployment-and-cta-safety/)
[2] [PERF survey shows steady staffing decrease over the past two years]( https://www.policeforum.org/workforcemarch2022)
[3] [US murder rate continued grim climb in 2021]( https://www.foxnews.com/us/us-murder-rate-continued-grim-climb-in-2021-new-fbi-estimates-show)
[4] Zhang, Yingxue, et al. TrafficGAN: Off-Deployment Traffic Estimation with Traffic Generative Adversarial Networks. IEEE, 2019, pp. 1474–79.
Pdf: /pdf/398dba6511c22897697b7b005ad78af56a845b8f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Easy Learning from Label Proportions | Accept (poster) | Summary: This paper focuses on the problem of learning from label proportions. It addresses the performance degradation issue of the EPRM method when the hypothesis class lacks sufficient expressiveness. To overcome this problem, the paper introduces EasyLLP as a solution. In practice, EasyLLP differs from EPRM (or PropMatching) in the calculation process. EasyLLP first calculates the corrected instance-level loss and then takes the average, whereas EPRM first takes the average of the bag-level predictions and then calculates the loss. This simplifies the implementation of the learning algorithm: just utilize a corrected loss function and approach the problem as a regression task.
Strengths: This paper studies the problem of LLP, which is an important problem to the community. The proposed method is based on a novel loss correction method, which could be useful for some related problems.
The paper points out and analyzes the limitation of EPRM. This finding is interesting and novel.
Proposition 4.2 and its proof seem to be a significant contribution. Based on Proposition 4.2, the proposed EasyLLP is sound and easy to implement. Experimental results show the effectiveness of the proposed EasyLLP.
Weaknesses: There are some minor issues:
1. Literature [16] also proposes a method based on unbiased risk estimation and is very related to the proposed method. How does EasyLLP compare with [16]? This should be compared and discussed in the experiments.
2. In section 3 the paper discussed the limitation of EPRM. However, why EasyLLP solves this problem is unclear. Please add some discussion.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > On originality: Comparison to [16], and other papers in this literature.
* A similar point was raised by Reviewer c8Fj. Thanks for stimulating a more thorough discussion of the related literature. The paper Reviewer YhoX mentions is related to ours, but it by no means subsumes our results. For instance, [16], as well as other papers in this stream of literature, relies on two or more $U$ sets, which are assumed to be diverse in the prior mixture. It is this diversity that allows the authors to construct unbiased estimates and then derive consistency results. In our case, the bags have the same prior $p$, and we work under the assumption that we cannot handcraft diverse bags out of our samples, as the aggregation into bags is done without access to the class-conditional distributions $p(x|y=1)$ and $p(x|y=-1)$. This setting is largely motivated by practical scenarios where the learner may not have control over the way bags are generated. The difference between the two settings can also be observed in the different flavor of the consistency results. E.g., in [16] (even with $m = 2$ bags) the consistency limit has to be interpreted “as the bag size $n_{tr} = n_1+n_2$ goes to infinity”. In our case, the bag size $k$ has to remain constant, and it is the number of bags that goes to infinity.
* One more difference we found is that, unlike our paper, all results in these previous works, as presented, make assumptions about the loss function (e.g., proper losses such as the square loss or cross entropy for [16], margin-based losses for Lu et al., “On the minimal supervision”, ICLR 2019). Our approach makes no such assumption: our debiasing procedure applies to any function g(x,y) of two variables, hence we can debias, e.g., also the *gradient* of a loss function, enabling the principled use of stochastic gradient descent with only label-proportion information. This comes for free in our approach, and we have not seen it in the literature. Moreover, in Appendix C, we provide an extension to the multiclass case.
* We will add more detailed discussion in the related work section of the paper.
> Limitations of EPRM and why EasyLLP overcomes them.
The main difference between EPRM (aka Proportion Matching) and EasyLLP seems to be that the former can be catastrophically bad in non realizable settings (i.e., when the function $h^* : x \mapsto P(y=1\mid x)$ is not an element of the function class $\mathcal{H}$). On the other hand, EasyLLP is not relying on this assumption. When confronted with a non realizable setting, the EasyLLP solution will simply converge to the best in class solution within class $\mathcal{H}$. Yet, it should be added that Thm 3.2 and Cor 3.3 only provide *sufficient* conditions for EPRM to perform well. We do not know to what extent these conditions are also necessary, and in lines 192-201 we give an example (only verified empirically) where, if these conditions are not satisfied, the EPRM minimization criterion can cause learning to go completely off trail.
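For concreteness, the bag-level Proportion Matching (EPRM) objective described above, which averages the bag's predictions first and then compares the average to the bag's label proportion, can be sketched as follows. The squared loss and all numerical values are illustrative only; EasyLLP's corrected instance-level loss is not reproduced here.

```python
# Schematic of the Proportion Matching objective: compare the bag-level
# average prediction to the bag's label proportion. Toy values throughout.

def prop_match_loss(preds, bag_proportion):
    avg = sum(preds) / len(preds)        # bag-level average prediction
    return (avg - bag_proportion) ** 2   # match it to the label proportion

preds = [0.8, 0.3, 0.6, 0.1]  # per-instance P(y=1|x) estimates in one bag (toy)
print(prop_match_loss(preds, 0.5))  # avg ~ 0.45 -> loss ~ 0.0025
```

EasyLLP instead corrects each instance-level loss first and then averages, which is what lets it converge to the best-in-class solution even in non-realizable settings.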
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I have read the response and other reviews.
I was expecting an experimental comparison with [16].
I will keep the score as is.
---
Reply to Comment 1.1.1:
Title: On the experimental comparison to [16] (and related papers)
Comment: Thank you for the response.
Due to the differences in problem setup between our work and that of [16], it is unclear how a fair comparison should be performed. In particular, in the third paragraph of section 4, the authors of [16] write: “Note that in most LLP papers, each $U$ set is uniformly sampled from the shuffled $U$ training data, therefore the label proportions of all $U$ sets are the same in expectation. As the set size increases, all the proportions converge to the same class prior, making the LLP problem computationally intractable. As shown above, our experimental scheme avoids this issue by determining valid class priors before sampling each $U$ set.”
Our work is in the LLP setting the authors describe where bags contain i.i.d. examples and all bags have the same class prior. As we have been trying to emphasize in our first response, this is quite different from the setup considered in the experiments of [16]. Instead, in [16] each bag ($U$ set) is assigned a random class prior $\pi$ from the interval $[0.1, 0.9]$, and then the $U$ set is filled with examples drawn from the mixture distribution given by $p_{tr}(x) = \pi p_p(x) + (1-\pi) p_n(x)$, where $p_p$ and $p_n$ are the conditional densities of $x$ for the positive and negative class, respectively. Our methods were not designed for the setting of [16], and the methods of [16] were not designed for our setting. | Summary: The paper aims to advance the theoretical understanding behind the LLP problem, and provide the conditions under which the algorithm is expected to work. They propose a theoretically founded algorithm for learning from label proportions called EasyLLP. In particular, they have shown how to estimate the expected value of any function of (x, y) pairs from labeled data. They have also shown complexity guarantees for ERM and convergence guarantees for SGD.
The authors have also evaluated their proposed approach against PropMatch and 2 baseline models, on 4 datasets: MNIST, CIFAR-10, UCI Adult, and Higgs. For the datasets that were multi-class, the authors considered the corresponding binary classification variant.
The results show
Strengths: The authors have clearly described their approach, and demonstrated the results against baselines on 4 datasets. Understanding the theoretical nature of LLP models is important given the application of the domain. The paper is a good step in that direction.
Weaknesses: In order to improve the reproducibility of the approach, it would be helpful if the authors could either share their code or provide pseudocode for using EasyLLP.
The evaluation is limited to fairly broad datasets such as MNIST, CIFAR, and Higgs. The LLP problem is applied to various real-world problems, including those rightly noted by the authors, such as advertising. Datasets in such domains pose additional challenges, such as a more complex feature space and class imbalance. It would be great if the results could be shown with such complexities taken into account.
I'd encourage the authors to consider more recent approaches in LLP for comparing their results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Concerns already listed under Weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > On reproducibility
We find EasyLLP quite simple to reproduce, even without pseudocode. In any event, we will happily make the code for our experiments available.
> Limited experimental evaluation
We have strived to do as thorough an evaluation as possible by considering different datasets with different class proportions. Using advertising datasets can prove challenging, as there are no standard datasets we are aware of (for privacy and proprietary reasons) that would allow us to test the efficacy of our methods at predicting event-level labels.
> Comparison to more recent approaches in LLP
We appreciate the suggestion and would find it very helpful if the reviewer could point us to the more recent approaches they are alluding to in their review.
---
Rebuttal Comment 1.1:
Comment: @Authors: Thanks for addressing my concerns. Based on your response above and other reviewer responses, I am revising my score. | Summary: The paper presents a debiasing approach called EASYLLP for Learning from Label Proportions (LLP), where only class label frequencies in bags are available. The authors provide theoretical analyses of a label proportion matching algorithm and propose a general debiasing technique for estimating instance loss. Experimental results demonstrate the effectiveness of their approach compared to existing methods in various learning frameworks.
Strengths: The proposed method in the paper has several advantages. Firstly, it is described as simple and straightforward, making it easy to implement. This aspect can be beneficial for practitioners and researchers looking for an accessible solution to the problem of Learning from Label Proportions (LLP). Additionally, the paper is praised for being well-written, indicating clear and concise explanations of the concepts and methods presented.
Weaknesses: My main concern is its relationship with previous works, such as [16] and references therein. Both of the papers assume data are generated at random, and propose a debiasing procedure via linear transformations, with similar theoretical results on consistency. It is difficult to see any fundamental innovation in the current work compared to previous works, especially given that all theoretical results are straightforward after confirming the consistency of the proposed risk.
For [16] and a more fundamental work, "On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data" (ICLR19), separation of the class prior distributions is required, as argued in the paper, because they have a $\theta-\theta'$ factor in the reweighting denominator, which could be reduced by multiplying them. The difference seems to be that the ICLR19 paper calculates the expectation of the risk using two sets, whereas the current paper uses one set; other derivations are similar.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Compare with [16] and other works such as On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data (ICLR19), what are the fundamental differences of the current paper?
---------------------
After rebuttal, I am satisfied with the answer on the two key differences, and would like to raise my score.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > On originality: Comparison to [16], and other papers like “On the Minimal Supervision…”
* We thank the reviewer for stimulating a more thorough discussion of the related literature. The papers the reviewer is mentioning are related to ours, but they by no means subsume our results. For instance, both [16] and the “minimal supervision…” paper (as well as other papers in this stream of literature) rely on two or more $U$ sets, which are assumed to be diverse in the prior mixture. It is this diversity that allows the authors to construct unbiased estimates and then derive consistency results. In our case, the bags have the same prior $p$, and we work under the assumption that we cannot handcraft diverse bags out of our samples, as the aggregation into bags is done without having access to the class-conditional distributions $p(x|y=1)$ and $p(x|y=-1)$. This setting is largely motivated by practical scenarios where the learner may not have any control over how bags are generated. The difference between the two settings can also be observed in the different flavor of the consistency results. E.g., in [16] (even with $m = 2$ bags) the consistency limit has to be interpreted ``as the bag size $n_{tr} = n_1+n_2$ goes to infinity''. In our case, the bag size $k$ has to remain constant, and it is the number of bags that goes to infinity.
* Another difference is that, unlike our paper, all results in these previous works, as presented, make assumptions about the loss function (e.g., square loss or cross-entropy for [16], margin-based losses for the “minimal supervision…” paper). The fact that we make no such assumptions allows us to apply our debiasing procedure to any function $g(x,y)$ of two variables; hence we can also debias, e.g., the *gradient* of a loss function, enabling the principled use of stochastic gradient descent with only label proportion information. This comes for free in our approach, and we have not seen it in the literature. Moreover, in Appendix C we provide an extension to the multiclass case.
* We will add more detailed discussion in the related work section of the paper. | Summary: The authors start by providing a theoretical analysis of the proportion matching algorithm, a standard algorithm from the literature that simply minimizes the loss over the average instance-level predictions.
Strengths: I find the way the authors approach the problem of learning from label proportions, and the goal they set out to achieve, to be very worthwhile. I also believe the theoretical results therein might be of interest to the community. I did, however, struggle at times in managing to follow the paper. Therefore, in my opinion, the authors really need to spend some effort polishing and refining the exposition to make it accessible to the broader community.
Weaknesses: - My main gripe with the paper is that I would've liked to see more discussion, or intuition following each Theorem, corollary or proposition. For instance, I really would've liked the authors to spend some time discussing the assumption upon which Theorem 3.2 hinges, and if such an assumption is expected to hold in practice (although I do appreciate them giving an example for when it fails due to the model class not containing the true conditional distribution)
- Empirical evaluation seems to suggest improvement only with large bag sizes (2^7), and on some dataset not at all, compared to the baselines.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Could you please say more regarding the sentence starting line 166? While the objective being approximated by equation (1) is clear to me, I'm not clear on what it means for equation (1) to "correctly approximate" it.
- In Figure 1, left plot, why does the prop matching loss start out much lower compared to other methods?
- Could you please elaborate on the statement of Theorem 3.2? In particular, I've been trying to conclude how strong an assumption it is that the minimizer of the expected loss is the average of $Z$ (I do believe it is very strong, and seldom holds in any realistic setting)
- In definition 4.1:
- Is $p$ as defined in the notation section or Theorem 3.2?
- I find the use of $\alpha$ here confusing, as my first instinct was to think of it as a label proportion, but in the notation section you only define $\alpha$ as a function of a bag. I then noticed that it is any real-valued parameter in $[0,1]$.
- Why does this definition make sense? I interpret this as somehow correcting the bias introduced by using bags instead of instances. If that is true, why does this correction term make sense?
- I think this definition requires some notion of distribution due to the presence of $p$?
- In proposition 4.2:
- the notation does not make it clear how $x_j$ is related to $\mathcal{B}$. I think something along the lines of $\mathbb{E}_{(x_j, \alpha) \sim (\mathcal{B}, \alpha)}$ would perhaps better convey your intent? But then I'm confused again, because somehow $\tilde{g}$ is a function of an individual instance and $\alpha$? But $\alpha$ is a function of the entire bag?
- Am I correct in reading the equations as saying that the soft-label corrected function averaged over all bags and proportions is simply equal to $g$ averaged over the instance-level data distribution? If so, then it is my opinion that this needs be framed and discussed since, from what I understand, your entire approach hinges upon this result, and it is by no means easy to see.
- I'm very confused by Equation 5.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for pointing out a number of places where further discussion and intuition would improve the presentation, and we will add additional discussion to the paper.
> “Empirical evaluation seems to suggest improvement only with large bag sizes…”
Yes, we agree that EasyLLP only sees empirical improvements for large bag sizes (and sometimes we do not outperform PropMatch at all). However, given that the LLP problem becomes more difficult as the bag size grows, we expect differences between methods to become more pronounced at larger bag sizes. In the complete set of experiments in appendix section A.5, EasyLLP is the only method that is consistently competitive with other methods at every bag size (e.g., in Figure 4 we see that PropMatch and DA have relatively poor performance even at bag size $k = 8$ for the two ConvNet models). We also know that there are cases (e.g., the synthetic example from Section 3) where PropMatch converges to a high loss model and will not improve even with access to more data.
> “Could you please say more regarding the sentence starting line 166…”
We mean that if you find a classifier that minimizes the empirical proportion matching loss (Equation 1), then as long as you have enough data, it will also approximately minimize the population level loss (Equation 2). We will clarify the language.
> “In Figure 1, left plot…”
Unlike EasyLLP, the proportion matching loss has a different meaning (i.e., it is a measure of how close the model’s average prediction on a bag is to the bag’s label proportion). On the other hand, EasyLLP is an estimate of the *event* training loss (not averaged over the bag), so we should expect the EasyLLP training loss to track the event training loss, but for the proportion matching loss to be somewhat unrelated. This behavior is indeed what we observe in Figure 1.
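The distinction can be made concrete with a small numeric sketch (all numbers hypothetical): on a single bag, the proportion matching loss scores the *averaged* prediction against the bag proportion, while the event-level loss scores each prediction against its own label, so the two quantities need not track each other.

```python
import numpy as np

def bce(q, t):
    """Binary cross-entropy of a predicted probability q against a target t."""
    q = np.clip(q, 1e-7, 1 - 1e-7)
    return -(t * np.log(q) + (1 - t) * np.log(1 - q))

preds = np.array([0.9, 0.2, 0.7, 0.4])   # model outputs on one bag (hypothetical)
alpha = 0.5                               # the bag's label proportion

# Proportion matching: one loss per bag, computed on the *averaged* prediction.
prop_match = bce(preds.mean(), alpha)     # ~0.698

# Event-level loss: one loss per example. The true labels below are shown only
# for illustration; an LLP learner never sees them, and EasyLLP instead
# *estimates* this quantity from (bag, proportion) data.
labels = np.array([1, 0, 1, 0])
event_loss = bce(preds, labels).mean()    # ~0.299, quite different from above
```

The gap between the two numbers on the same bag illustrates why the two curves in Figure 1 can start at very different values.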
> “Could you please elaborate on the statement of Theorem 3.2?”
* Theorem 3.2 has two key assumptions: 1. The function $h^* : x \mapsto P(y=1\mid x)$ is an element of the hypothesis class $\mathcal{H}$ (realizability assumption) and 2. The loss function $\ell$ has the property that for any random variable $Z$, $E[Z]$ minimizes the function $\rho \mapsto E_Z[\ell(\rho, Z)]$.
* The assumption about the loss is relatively mild, and in Corollary 3.2 (Line 181) we show that this holds for two commonly used losses: the binary cross-entropy and the squared losses.
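For concreteness, the check for the two losses covered by Corollary 3.2 amounts to first-order conditions in $\rho$:

$$\frac{d}{d\rho}\,\mathbb{E}\big[(\rho - Z)^2\big] = 2\big(\rho - \mathbb{E}[Z]\big) = 0 \iff \rho = \mathbb{E}[Z],$$

$$\frac{d}{d\rho}\,\mathbb{E}\big[-Z\log\rho - (1-Z)\log(1-\rho)\big] = -\frac{\mathbb{E}[Z]}{\rho} + \frac{1-\mathbb{E}[Z]}{1-\rho} = 0 \iff \rho = \mathbb{E}[Z],$$

so both the squared loss and the binary cross-entropy (for $Z$ taking values in $[0,1]$) satisfy the assumption.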
> **Questions about Definition 4.1:**
> “Is $p$ as defined in the notation section or Theorem 3.2?”
$p$ is as defined in the notation section, which is the marginal probability that $y = 1$. We will change the symbol used in Theorem 3.2 to avoid the conflict.
> “I find the use of $\alpha$ here confusing…”
The soft-label corrected function takes as input a feature vector $x$ and a label proportion $\alpha$. The intended interpretation is that $x$ will be a feature vector from a bag $B$ of examples, and $\alpha$ will be the label proportion (which is a random variable) for that bag. We have admittedly overloaded the notation by viewing $\alpha$ as both a random variable (notation section) and its value (Def. 4.1).
> “I think this definition requires some notion of distribution…”
You are right, this definition depends on the value $p$, which is the marginal probability that $y = 1$ for the underlying data distribution. We will clarify this.
> “Why does this definition make sense?”
The soft-label corrected function $\tilde g$ is defined this way so that the subsequent Proposition 4.2 (unbiasedness) holds.
> **Questions about Proposition 4.2**
> “The notation does not make clear how $x_j$ is related to $B$”
In the proposition there is a sample $(x_1, y_1), \dots, (x_k, y_k)$ drawn i.i.d. from the data distribution. Bag $B = (x_1, \dots, x_k)$ contains the feature vectors, and $\alpha = \frac{1}{k} \sum_{i=1}^k y_i$ is the proportion of the labels in the bag that are positive. The claim is that if we fix an index $j \in \{1, \ldots, k\}$ and consider the $j$-th element $x_j$ in the bag, the expected value of $\tilde g(x_j, \alpha)$ (the expectation being w.r.t. the random draw of the labeled bag $(B,\alpha)$ or, equivalently, w.r.t. the random sample $(x_1, y_1), \dots, (x_k, y_k)$) is equal to the expected value of $g(x,y)$, where the expectation is w.r.t. a fresh sample $(x,y)$ from the data distribution.
> “But then I’m confused again because somehow $\tilde g$ is a function of an individual instance and $\alpha$?”
Yes, $\tilde g$ is a function of one feature vector $x_j$ from the bag, and the label proportion $\alpha$ of the bag. So, in a sense, $\tilde g$ is a function of the entire sample $(x_1,y_1), \ldots, (x_k,y_k)$ (via $x_j$ and $\alpha$).
> “Am I correct in reading the equations as saying…”
* Yes, your interpretation is essentially correct. For any data distribution $D$ over $(x,y)$ pairs, there is a corresponding distribution over bags with label proportions (i.e., $B = (x_1, …, x_k)$ and $\alpha$). Proposition 4.2 shows that it is possible to estimate the expected value of $g$ on the distribution $D$ given sample access to the bag and proportion distribution by using the soft-label corrected function. We will add more discussion to the paper to better elucidate the correct interpretation.
* The paragraph on lines 252-260 was meant to outline how the soft-label corrected loss is applied in learning contexts, but we will certainly include some additional discussion following Proposition 4.2, as the reviewer suggests.
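This unbiasedness claim is easy to check with a small Monte Carlo simulation. Since Definition 4.1 is not reproduced in this discussion, the affine-in-$\alpha$ correction below is reconstructed from the unbiasedness requirement itself (the paper's exact presentation may differ), and the data-generating choices are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
k, p = 8, 0.3                  # bag size and marginal probability P(y = 1)
n_bags = 200_000

def g(x, y):
    # an arbitrary test function of an (x, y) pair
    return np.sin(x) + 2.0 * y * x

def g_tilde(x, alpha):
    # Soft-label corrected g: affine in alpha, reconstructed so that
    # E[g_tilde(x_j, alpha)] = E[g(x, y)] holds (not copied from Def. 4.1).
    return (k * (alpha - p) + p) * g(x, 1) + (k * (p - alpha) + 1 - p) * g(x, 0)

y = rng.binomial(1, p, size=(n_bags, k))     # i.i.d. labels, shared prior p
x = rng.normal(loc=y, scale=1.0)             # features correlated with labels
alpha = y.mean(axis=1)                       # per-bag label proportion

instance_mean = g(x, y).mean()                 # ground truth: E[g(x, y)]
bag_estimate = g_tilde(x[:, 0], alpha).mean()  # uses only x_1 and alpha per bag
# the two means agree up to Monte Carlo error
```

Averaging $\tilde g$ over every index $j$ rather than only $j = 1$ leaves the expectation unchanged and reduces variance.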
> “I’m very confused by Equation 5”
* Normally in empirical risk minimization, we find the classifier $h$ from a hypothesis space $\mathcal{H}$ that minimizes the loss on the training data. Equation 5 is the estimated training loss on a dataset of bags, where $x_{i j}$ is the $j$-th point of the $i$-th bag, and $\alpha_i$ is the label proportion for that bag.
* The rest of Sect. 5 studies how much worse it is to minimize Equation 5 compared to the actual training loss.
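To make this concrete, here is a sketch in the spirit of Equation 5 (whose exact form is not reproduced in this discussion) for a linear model with binary cross-entropy; the soft-label correction is again reconstructed from the unbiasedness requirement rather than copied from the paper:

```python
import numpy as np

def bce(score, y):
    # binary cross-entropy of a sigmoid score against a label in {0, 1}
    q = 1.0 / (1.0 + np.exp(-score))
    return -(y * np.log(q) + (1 - y) * np.log(1 - q))

def easy_llp_loss(w, bags, alphas, k, p):
    # Average the soft-label corrected loss over every point x_ij of every
    # bag i, using that bag's label proportion alpha_i. The affine correction
    # below is a reconstruction, not the paper's exact Definition 4.1.
    total = 0.0
    for B, alpha in zip(bags, alphas):
        s = B @ w                                   # linear model scores
        total += ((k * (alpha - p) + p) * bce(s, 1.0)
                  + (k * (p - alpha) + 1 - p) * bce(s, 0.0)).sum()
    return total / (len(bags) * k)
```

With bag size $k=1$, $\alpha_i$ equals the true label and the objective reduces exactly to the ordinary instance-level loss, which is a convenient sanity check.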
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. I have revised my score. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Understanding Multi-phase Optimization Dynamics and Rich Nonlinear Behaviors of ReLU Networks | Accept (spotlight) | Summary: This paper conducts a comprehensive theoretical analysis of the training dynamics of two-layer ReLU neural networks on linearly separable data. The authors isolate four discrete phases of the training procedure and describe specific nonlinear behaviors that occur during each phase. Moreover, they derive explicit formulae for the evolution of the network parameters and prove convergence results for the network's output. The paper's contributions include a complete theoretical characterization of the training process, a better understanding of the role of initialization and regularization, and insights into the generalization properties of ReLU networks. In brief, the research provides a valuable addition to the domain of deep learning theory and sheds light on the training process of ReLU networks.
Strengths: The originality of the paper lies in its complete theoretical characterization of the training process of the two-layer ReLU network. While previous work has focused on local analysis or approximate linear models, this paper provides a more comprehensive understanding of the optimization dynamics of ReLU networks. The authors identify four distinct stages of the training process and describe the specific nonlinear behavior that occurs at each stage. They also derive explicit formulas for the evolution of network parameters and demonstrate convergence results for network outputs. In addition, the authors provide rigorous mathematical proofs and derive explicit formulas that shed light on the complex dynamics of ReLU networks.
Weaknesses: First, a weakness of the paper is that it focuses on the specific setting of two-layer ReLU networks trained on linearly separable data. While this setting is useful for providing a full theoretical characterization of the training process, it may not generalize to more complex networks or datasets.
Second, another weakness of the paper is that it does not provide empirical verification of its theoretical results. While the authors provide rigorous mathematical proofs and derive explicit formulas, it would be useful to validate these results on real-world datasets.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Can the authors provide empirical validation of their theoretical results? While the authors provide rigorous mathematical proofs and derive explicit formulas, it would be useful to validate these results on real-world datasets.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition of our work and helpful comments. Below, we offer detailed responses to the reviewer's questions:
**Q1. First, a weakness of the paper is that it focuses on the specific setting of two-layer ReLU networks trained on linearly separable data. While this setting is useful for providing a full theoretical characterization of the training process, it may not generalize to more complex networks or datasets.**
**Response.** As the reviewer commented, under such a relatively strict assumption on data and networks, we can capture the entire training process of the neural network, exhibiting multi-stage optimization dynamics and rich nonlinear behaviors. While our theorems might not hold for more intricate networks and datasets, similar multi-phase dynamics and nonlinear behaviors could still manifest. For a preliminary exploration of the extension of our findings, please refer to our **``Global'' Response to All Reviewers**. Furthermore, we are intrigued by exploring the complete training dynamics of more complex networks and datasets, focusing on characterizing new nonlinear behaviors. We leave this to future work.
**Q2. Second, another weakness of the paper is that it does not provide empirical verification of its theoretical results. While the authors provide rigorous mathematical proofs and derive explicit formulas, it would be useful to validate these results on real-world datasets. Can the authors provide empirical validation of their theoretical results?**
**Response.** We thank the reviewer for reminding us to provide more empirical validation. We have conducted more experiments to further support our theoretical results. For experimental results, please refer to our **``Global'' Response to All Reviewers**. Regarding real-world datasets and networks, some previous works indicate similar nonlinear phenomena may still occur; please refer to our **response to Reviewer 79Zb's Q3**. Additionally, we will pursue theoretical investigations into these aspects in our future work.
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for your careful explanation and detailed rebuttal. Based on the supplementary experiments given, I feel that this paper is of great help in understanding the nonlinear dynamics of neural networks.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: We thank the reviewer for the positive feedback of our response and the recognition of our work. We appreciate your support very much! | Summary: The training of ReLU neural networks involves complex nonlinear phenomena, challenging theoretical analysis due to the nonlinearity of models and non-convexity of loss. This study provides a comprehensive theoretical characterization of the training process of a two-layer ReLU network using Gradient Flow on linearly separable data. From random initialization to final convergence, the analysis identifies four different training phases indicating a trend from simplification to complication. Specific nonlinear behaviors, like initial condensation, saddle-to-plateau dynamics, plateau escape, activation pattern changes, and increasing learning complexity, are accurately captured.
Strengths: 1. The paper presents a complete theoretical characterization of the training dynamics of 2-layer ReLU networks, which addresses the nonlinearity and non-convexity challenges.
2. The precise characterization of four distinct phases of the training dynamics.
Weaknesses: - A missing related work on networks' training dynamics and phases [1]
- Could authors run more hyperparameter settings ($\kappa_1$, $\kappa_2$, $\delta$, etc.) and see if the simulated training times ($T_1$ through $T_4$) are aligned with the estimated ones?
[1] "Neural Networks as Kernel Learners: The Silent Alignment Effect" Atanasov et al. 2021
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Do these four phases persistently exist across different settings ($\kappa_1$, $\kappa_2$, $\delta$, etc.)?
- For Phase III: “deactivation of positive neurons on $\mathbf{x}_-$” — is this a sign of implicit bias of NN during gradient flow?
- Phase IV: “reactivation of negative neurons on $\mathbf{x}_+$” — is this a sign of NN's overfitting?
- It is a bit surprising to me that network width ($m$) and learning rate ($\eta$) are not included in the form of T1~T4. Could authors intuitively explain why is it from the viewpoint of the proof strategy in this paper?
- One previous work on NN convergence stated that "strong correlations" (of input patches) "imply faster convergence", which contradicts the conclusions in this work (i.e., smaller $\delta$ indicates slower convergence). Could the authors compare with that work?
[2] "When is a Convolutional Filter Easy To Learn?" Du et al. 2017.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - Assumption 3.1 is oversimplified. Basically, all training data collapse into two signals. How will the results change when we switch to a slightly more realistic setting by adding some stochastic noise on top of $\mathbf{x}_+$ and $\mathbf{x}_-$ for each data point?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation of our work and helpful suggestions to improve this paper. In the following, we answer the reviewer’s questions in detail.
**Q1. A missing related work on network's training dynamics and phases [1].**
**Response.** We thank the reviewer for pointing out the interesting related work [1]. In [1], the authors also characterize multi-phase dynamics of training NNs and demonstrate that NNs in the rich feature learning regime learn a kernel machine due to the silent alignment phenomenon, similar to the initial condensation phenomenon in our work. We will add and discuss this work in the revised version.
**Q2. Could authors run more hyperparameter settings and see if simulated training times are aligned with estimated ones? Do these four phases persistently exist across different settings?**
**Response.** We thank the reviewer's valuable suggestions on new experiments to supplement our theoretical results. To address this, we have conducted additional experiments under more hyperparameter settings (different $\Delta$, $p$, $\kappa_1$, and $\kappa_2$). These experimental results consistently validate the existence of four training phases in our theory and the reliability of our theoretical estimates on training time. For more details, please refer to our **``Global'' Response to All Reviewers**.
**Q3. Explanations for Phase III and IV.**
**Response.** We appreciate the reviewer's insightful perspective. We concur with the reviewer's understanding and will integrate this discussion into our revised version.
- In Phase III, neurons' deactivation is indeed an implicit bias of GF, or more precisely a ``simplicity bias'' [3]. Additionally, GF does something similar in Phase I, simplifying the network through initial condensation.
- In Phase IV, neurons' reactivation is indeed an overfitting behavior because the ReLU activation patterns change from the ``simplest'' scheme to a more complex scheme (as discussed in Table 1).
- Combining these understandings, the evolution of activation patterns exhibits a simplifying-to-complicating learning trend.
**Q4. Why network width and learning rate are not included in the estimated times?**
**Response.**
- Learning rate $\eta$. Since our focus lies on GF, the version of GD with infinitesimal learning rate (equ (2) in Line 117), our results are independent of $\eta$.
- Network width $m$. We would like to provide an intuitive explanation for the influence of $m$ from our proof strategy.
- $T_I.$ In Phase I, we decompose neurons' dynamics into tangential and radial dynamics. For small initialization, during initial training, the radial growth of neurons is much slower than their tangential motion, resulting in directional condensation. For the tangential velocity $\|d w_k(t)/d t\|$ (equ (7)), note: (i) at initialization, both $\kappa_2/(\sqrt{m}\rho_k(0))=\kappa_2/\kappa_1$ and $F_k(0)$ are independent of $m$; (ii) during transient initial training, both the norm $\rho_k(t)$ and $F_k(t)$ stay close to initialization. Thus, the tangential velocity is unaffected by $m$ during Phase I.
- $T_{II}$, $T_{III}$ and $T_{IV}$. In Phases II, III, and IV, we analyze $f(x_+;\theta(t))$ and $f(x_-;\theta(t))$. Notably, both $a_k$ and $b_k$ are normalized by $1/\sqrt{m}$. Thus, in terms of $m$, the prediction $f(x;\theta)=\sum_{k\in[m]}a_k\sigma(b_k^\top x)$ (on data $x$) has magnitude $m\cdot(1/\sqrt{m})\cdot(1/\sqrt{m})=1$, independent of $m$. Moreover, at the end of Phase I, living positive neurons (LPNs) and living negative neurons (LNNs) align with $\mu$ and $x_+^\perp$, respectively. After Phase I, LPNs stay close to each other, as do LNNs. So, for prediction, the LPNs and LNNs can be conceptually treated as a single positive neuron and a single negative neuron, respectively, independent of $m$. A similar idea is present in the neuron embedding technique in [3].
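The cancellation $m\cdot(1/\sqrt{m})\cdot(1/\sqrt{m})=1$ can be seen in a tiny numerical sketch (hypothetical $\mu$, $x$, and $\kappa$ values), with all living positive neurons condensed onto one shared direction as described above:

```python
import numpy as np

def prediction(m, x, mu, kappa1=1.0, kappa2=1.0):
    # Two-layer ReLU net f(x) = sum_k a_k * relu(b_k^T x) after condensation:
    # every neuron shares the direction mu, with the 1/sqrt(m) normalization
    # (a_k = kappa2/sqrt(m), ||b_k|| = kappa1/sqrt(m)).
    a = np.full(m, kappa2 / np.sqrt(m))
    b = np.outer(np.full(m, kappa1 / np.sqrt(m)), mu)   # condensed directions
    return float(a @ np.maximum(b @ x, 0.0))

mu = np.array([1.0, 0.0])
x = np.array([2.0, 0.5])
# magnitude m * (kappa2/sqrt(m)) * (kappa1/sqrt(m)) = kappa1*kappa2: the width
# m cancels, so the prediction is the same for every m
outs = [prediction(m, x, mu) for m in (16, 256, 4096)]
```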
**Q5. Comparison with the previous work [2] on ``strong correlations'' (of input patches)**
**Response.** We thank the reviewer for raising this intriguing question, and we provide the following response:
- In [2], the authors mentioned that when utilizing a CNN to extract a single feature $w^*$, optimization becomes easier with higher correlation between patches $\arg\left<Z_i,Z_j\right>$, and between patch and target $\arg\left<Z_i,w^*\right>$ (as shown in their Figure 1). The underlying reason lies in the context of a regression problem that involves only a ``single class of data'' (single feature).
- However, our setting is binary classification with two distinct classes of data ($\pm 1$), and the correlation between two classes should be characterized by label-weighted data angle: $\arg\left<1\cdot x_+,-1\cdot x_-\right>=\pi-\Delta$. Thus, smaller $\Delta$ means weaker data correlation. Moreover, our theory and experiments validate that larger $\Delta$ (indicating stronger data correlation) yields easier training.
- Therefore, the conclusions of our work and [2] are consistent, both implying: ``strong correlations imply faster convergence''.
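The label-weighted angle is easy to verify numerically (hypothetical two-dimensional data):

```python
import numpy as np

def angle(u, v):
    cosv = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cosv, -1.0, 1.0)))

delta = 0.4                               # angle between the two data points
x_pos = np.array([1.0, 0.0])
x_neg = np.array([np.cos(delta), np.sin(delta)])

# label-weighted angle: the angle between +1*x_pos and -1*x_neg equals
# pi - delta, so a smaller delta means more anti-aligned weighted data,
# i.e. weaker correlation in the sense used above
lw_angle = angle(+1 * x_pos, -1 * x_neg)
```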
**Q6. Relax assumption 3.1 to noisy data.**
**Response.** We thank the reviewer for offering this insightful suggestion to generalize our findings to noisy data. We have conducted additional experiments on our dataset with stochastic noise added on top of $x_+$ and $x_-$. Our numerical results illustrate that the same four-phase optimization dynamics and similar nonlinear behaviors persist for noisy data. For more details, please refer to our **``Global'' Response to All Reviewers**.
[1] Atanasov et al. Neural Networks as Kernel Learners: The Silent Alignment Effect. (ICLR 2022)
[2] Du et al. When is a Convolutional Filter Easy To Learn? (ICLR 2018)
[3] Lyu et al. Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias. (NeurIPS 2021)
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I appreciate the authors' response, and thus I would like to raise my score.
Another question: in the global response Table 2, why did the authors only compare $T_{\text{plat}}$ and $T_{\text{III}}$, instead of $T_\text{I}$ and $T_\text{II}$?
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: We sincerely thank you for your time and effort in reviewing our paper. Moreover, we would like to reiterate our gratitude for your valuable recommendation on additional experiments and thank you for raising the scores!
**Response to another question.**
- In Table 2 in the global response, we focus on the change of our theoretical bounds under different $p$ and $\Delta$ (data-dependent hyper-parameters). Due to space limitations, we were constrained to showcasing the **two most representative** time points.
- The reason for selecting $T_{\rm III}$ and $T_{\rm plat}$ is as follows: According to our theory,
(1) $T_{\rm II}$ and $T_{\rm III}$ have the **same** rate in terms of $p$ and $\Delta$, so we choose only one of them for presentation.
(2) $T_{\rm plat}$ and $T_{\rm III}$ have **different** rates in terms of $p$, and $T_{\rm plat}$ can clearly reflect the "harm" of larger $p$ and smaller $\Delta$ on training accuracy. Hence, we choose to show $T_{\rm plat}$.
(3) $T_{\rm I}$ is extremely short and remains **unaffected** by $p$ and $\Delta$, hence it is not included in the PDF for presentation.
- In our revised version, we will present the **complete** numerical results, i.e. the change of {$T_{\rm I}, T_{\rm II}, T_{\rm plat}, T_{\rm III}$} under different {$p,\Delta,\kappa_1,\kappa_2$}. | Summary: This paper aims to provide a theoretical understanding of the dynamics involved in training neural networks beyond the linear regime. The authors focus on a specific scenario where a two-layer ReLU network is trained using Gradient Flow (GF) on linearly separable data. The analysis encompasses the entire optimization process, starting from random initialization and concluding with final convergence.
Despite the simplicity of the model and data used in the study, the authors uncover multiple phases within the training process. By conducting a meticulous theoretical analysis, they precisely identify four distinct phases that exhibit various nonlinear behaviors.
In Phase I, the initial stage of training, there is a phenomenon of condensation and simplification as active neurons rapidly gather in two different directions. Simultaneously, the GF successfully escapes from the saddle point around the initialization.
Phase II involves a prolonged period where the GF becomes trapped in a plateau of training accuracy. However, it eventually manages to escape from this stagnation.
During Phase III, a significant number of neurons are deactivated, leading to self-simplification of the network. The GF then adapts its learning approach using this simplified network configuration.
In Phase IV, a considerable number of previously deactivated neurons are reactivated, resulting in self-complication of the network. Finally, the GF converges towards an initialization-dependent direction.
Overall, the training process exhibits a remarkable behavior of transitioning from simplification to complication. The detailed analysis of each phase sheds light on the intricate dynamics involved in the learning process beyond the linear regime.
Strengths: The manuscript demonstrates excellent organization, comprehensiveness, and clarity, effectively covering crucial aspects such as a thorough review of related works and a comprehensive discussion of limitations. Moreover, the paper stands out by successfully combining empirical and theoretical approaches.
Overall, this work makes significant contributions that advance the current state of the literature.
Weaknesses: These suggestions are intended to improve the overall clarity and accessibility of the research.
The manuscript does not provide sufficient details regarding the resources and computational aspects involved in the research. Additionally, the work would greatly benefit from a clear delineation of its limitations.
Outlined below are some limitations that underscore the weaknesses of the paper, despite its current status as the best effort in the field.
The paper lacks a significant number of experimental results to demonstrate the robustness of the findings. It would greatly enhance the study's credibility to include additional examples where the predictions have been validated and to provide measures of their robustness.
The submission would be strengthened by the addition of the code run for the experiments.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Dear authors, I have a few questions regarding your work that would greatly aid my comprehension:
How reliable are the predictions made in this study, and what was the extent of experimentation conducted to support them?
Can general principles be derived from this research that would benefit machine learning practitioners?
Have alternative types of architectures been tested to determine if the findings hold true across different models as well (i.e., more realistic architectures and data sets)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations of the proposed method are summarized as follows:
* Narrow Scope: This study focuses exclusively on ReLU neural networks trained with Gradient Flow (GF) on linearly separable data. Although the analysis comprehensively captures the optimization process and identifies four distinct phases with rich nonlinear behaviors, the generalizability of the findings to other neural network types or more complex datasets may be limited.
* Limited Generalization to Gradient Descent (GD): While the paper provides a detailed analysis of GF dynamics, it acknowledges that Gradient Descent (GD) dynamics are more complex and can exhibit additional nonlinear behaviors like progressive sharpening and the edge of stability. Consequently, a comprehensive understanding of the nonlinear behaviors during GD training is not fully explored in this study, indicating the need for future research in this direction.
* Theoretical Understanding of Neural Network Training: While this work makes significant strides in theoretically understanding the training dynamics of neural networks, it acknowledges that there is still much progress to be made in fully comprehending the entire training process. Theoretical advancements related to nonlinear behaviors in neural network training, beyond the specific focus of this study, present valuable opportunities for future investigations. With this in mind, I would like to suggest works like 'The Neural Race Reduction: Dynamics of Abstraction in Gated Networks' (Saxe, 2022), which derive analytical solutions for 'ReLU-like' neural networks.
Flag For Ethics Review: ['No ethics review needed.', 'Ethics review needed: Responsible Research Practice (e.g., IRB, documentation, research ethics)']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our work and for pointing out the relevant papers. We answer the reviewer’s questions in the following.
**Weakness on experimental results and details.**
**Response.** We thank the reviewer's suggestions on experiments to improve our paper.
- **More experiments.** We have conducted more experiments to verify the robustness of our theoretical results under more hyperparameter settings. Please refer to our **``Global'' Response to All Reviewers** for further details.
- **Code.** Following the rebuttal rule in NeurIPS 2023, we have sent an anonymized link to the AC in a separate comment encompassing our standard code along with details of computational resources.
**Q1. How reliable are the predictions made in this study, and what was the extent of experimentation conducted to support them?**
**Response.** We have conducted more experiments to verify our theory, please refer to our **``Global'' Response to All Reviewers**.
**Q2. Can general principles be derived from this research that would benefit machine learning practitioners?**
**Response.** Through our theoretical analysis, we suggest that the following insights could offer guidance to practitioners:
- **Use balanced data by preprocessing.** Our focus is the binary classification problem with a small ``margin'' between the two data classes and a slight imbalance ($p=n_+/n_->1$). These factors can lead to training challenges and undesirable behaviors such as plateaus. Our theory establishes that the training-accuracy plateau in Phase II persists for $T_{\rm plat}=\Theta(p/\kappa_2^2\Delta^2)$ (Theorem 4.5), which is proportional to $p$ and inversely proportional to $\Delta^2$. Our experiments support this, revealing that even a slight data imbalance ($p=4$) with $\Delta=\pi/15$ can result in a significantly long plateau. Therefore, for practitioners, although regulating $\Delta$ for multi-class data is complex, data imbalance can be resolved by simple preprocessing (equalizing the number of samples in each class), which can shorten the plateau and accelerate the rise in training accuracy.
- **Proper early stopping.** Our theory reveals that neural network training follows a simplifying-to-complicating learning trend. Initially, the network simplifies itself in Phases I and III, resulting in the ``simplest'' pattern at the end of Phase III (living positive neurons exclusively predict $x_+$, while living negative neurons solely predict $x_-$). However, in Phase IV, living negative neurons revert to predicting $x_+$, and the network's pattern becomes more intricate, which can be interpreted as overfitting. Therefore, proper early stopping is necessary to mitigate overfitting in real-world tasks.
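The plateau-time scaling quoted in the first point can be made concrete with a toy calculation (the helper `plateau_time` and the constant `c` are hypothetical; the $\Theta(\cdot)$ bound fixes only the rate, so only ratios between settings are meaningful):

```python
import math

# Illustrative only: T_plat = Theta(p / (kappa_2^2 * Delta^2)).  The hidden
# constant c is unknown, so we compare ratios rather than absolute times.
def plateau_time(p, delta, kappa2=1.0, c=1.0):
    return c * p / (kappa2**2 * delta**2)

t_imbalanced = plateau_time(p=4.0, delta=math.pi / 15)
t_balanced = plateau_time(p=1.0, delta=math.pi / 15)
speedup = t_imbalanced / t_balanced
# equalizing class sizes (p: 4 -> 1) shortens the predicted plateau by a factor of 4
```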
**Q3. Have alternative types of architectures been tested to determine if the findings hold true across different models as well (i.e., more realistic architectures and data sets)?**
**Response.** For more practical architectures and datasets beyond our specific settings, while strict theorems might not hold, similar training behaviors persist, as evidenced by the following studies:
- *Initial condensation* occurs in multi-layer fully-connected neural networks [2];
- *Directional convergence* is demonstrated in homogeneous deep neural networks [3];
- *Saddle-to-saddle* dynamics are found in deep linear neural networks [4];
- *Learning with increasing complexity* is observed in realistic datasets (such as CIFAR-10) and various architectures (FNNs, CNNs, and ResNets) [5].
These studies provide valuable insights into the generalizability of the training behaviors discussed in our work.
**L1. Narrow Scope.**
**Response.** As the reviewer commented, under such a relatively strong assumption, our work completely characterizes the entire multi-phase optimization dynamics and exhibits rich nonlinear phenomena. For more complex network architectures and datasets, while strict theorems might not be applicable, our **response to Q3** demonstrates the persistence of similar phenomena. Additionally, we will work to relax this assumption conditionally. For example, our new experiments suggest that training accuracy encounters more plateaus during training on noisy data. For more details, please refer to our **``Global'' Response to All Reviewers**. For a more in-depth study, we leave it to future work.
**L2. Limited Generalization to Gradient Descent (GD).**
**Response.** As mentioned by the reviewer, GD can exhibit more nonlinear behaviors, such as progressive sharpening and the edge of stability. Technically, analyzing these phenomena is more difficult compared to GF due to the consideration of appropriate learning rate and stability conditions. We leave analyzing GD's training dynamics for future work.
**L3. Theoretical Understanding of Neural Network Training.**
**Response.** As the reviewer commented, there is still a long way to go to study the training dynamics of neural networks. We thank the reviewer for pointing out the interesting related work [1]. The neural race introduces a novel implicit bias of learning dynamics: toward shared representations. This idea and the view of gating networks are very enlightening for extending our two-layer theory to deep ReLU neural networks and understanding the dynamics of deep ReLU networks. In our revised version, we will add and discuss this work.
[1] Saxe et al. The Neural Race Reduction: Dynamics of Abstraction in Gated Networks. (ICML 2022)
[2] Zhou et al. Towards Understanding the Condensation of Neural Networks at Initial Training. (NeurIPS 2022)
[3] Ji and Telgarsky. Directional convergence and alignment in deep learning. (NeurIPS 2020)
[4] Jacot et al. Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity. (arXiv 2021)
[5] Nakkiran et al. SGD on Neural Networks Learns Functions of Increasing Complexity. (NeurIPS 2019)
---
Rebuttal 2:
Comment: I want to express my gratitude to the authors for their meticulous clarification and thorough response. I raised my evaluation and confidence scores to acknowledge the enhanced clarity and presentation, along with a deeper comprehension of the contribution.
---
Rebuttal Comment 2.1:
Title: Thanks
Comment: We would like to express our gratitude to the reviewer for the positive feedback of our response, and thank you for raising the scores! | Summary: In this work, the authors attempt an exact analysis of the training dynamics of 2-layer ReLU networks trained via gradient flow on linearly separable data. Specifically, the authors aim to build on related work (e.g., Boursier et al. [2022] on square loss) to the case of:
- Exponential loss (a more appropriate characteristic of classification problems)
- With data having mild orthogonal separability
Due to the additional non-linearity introduced by this specific loss type, and the complex data structure, the authors aim to characterize a richer non-linear structure.
By considering a 2-layer network, where the second-layer weights are held fixed, the authors demonstrate that under gradient flow, the tunable weights of the network evolve in 4 distinctive phases, indicative of a simple to complex learning phenomena.
Strengths: - The authors extend the analytical formulation established in related previous literature to the case of the harder exponential loss type, which categorizes standard classification loss
- The authors relax the data orthogonality assumption utilized in these previous works by considering a case where the angle of separability between data points is $<90^\circ$
- The authors demonstrate additional data condensation and alignment phases over existent work at the onset and the end of the training, thus attributing to the aim of capturing richer non-linear phenomena.
- The authors establish bounds on the count of the tunable *positive/negative* neurons in each phase, along with bounds on their norms and on the time extent of these phases.
Weaknesses: 1. While the authors do draw similarities between their work and that of Boursier et al. [2022], they refer to the latter as performing substantial simplifications and, therefore, unable to capture a lot of non-linear phenomena. Nevertheless, the authors of this work too adopt an identical model and initialization strategy as in the above-cited work.
1.1 **Most importantly** in Assumption 3.1, are the authors comprising a dataset using **just two data points**? If so, then that is an exceptionally restrictive assumption.
1.2 The reference to the usage of the data-averaged direction over the Gram-Schmidt-type orthogonal direction, between that work and this, as defining a key improvement, is again not entirely justified if 1.1 above is true. In the case of a set of *non-degenerate* data vectors, defining *useful* orthogonal directions is challenging.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Can you detail how In Fig. 1, the projection to a 2D subspace span was performed?
- If the specific dot product of Line 151 has to be true, then data should be normalized, yet that is not mentioned in Assumption 3.1
- Can you explain the reason behind the assumption on $T_1$ in Line 191?
- Can you at least intuitively explain why the positive neuron has initial condensation/alignment with the label averaged direction $\mu$ and *then* transition to the expected $x_{-}^{\perp}$ direction? On a similar note, is there a reason only the negative neurons eventually *converge toward some specific directions dependent on both data and initialization*? Is it due to $p>1$? If so, I would assume the alignment signal impacts the positive labels over the negative ones, though.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See Questions and Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our work, as well as the valuable comments. We address the questions in the following.
**Response to Weakness 1.**
- **Main advantage over [1].**
- We acknowledge the similarity in our network and initialization strategy as [1]. However, our analysis takes a step further by delving into more intricate exp-type loss and non-orthogonal data. These aspects empower us to unravel richer nonlinear training behaviors.
- While our initial alignment directions ($\mu,x_+^\perp$) differ from [1], we don't consider this point as our main advantage. Initial condensation is broadly observed in reality, manifesting in various directions [2]. As the reviewer mentioned, determining condensation directions for realistic settings, such as useful orthogonal ones, is an important topic for future work.
- Our main advantage over [1] is that we capture richer nonlinear behaviors, such as Neuron Reactivation, Staged feature learning, and Initialization-dependent directional convergence. Please refer to our Section 5 for more details.
- **About Assumption 3.1.**
- We acknowledge the importance of this assumption in our theory. However, a complete analysis under such a relatively strong assumption is also of great interest to understanding the neural networks’ training dynamics and nonlinear behaviors.
- Our motivation is to illustrate that even in a simple setting, GF could exhibit *numerous nonlinear behaviors* that researchers have speculated about, such as saddle-to-plateau, simplifying-to-complicating, etc.
- We will work to relax this assumption conditionally. Inspired by reviewers' comments, we have explored a slight relaxation of this assumption by perturbing $x_+$ and $x_-$ with noise, and our experimental results illustrate that similar four-phase dynamics and nonlinear behaviors still exist. For details, please refer to our **``Global'' Response to All Reviewers**. For a more in-depth study, we leave it to future work.
**Q1. The projection in Fig 1.**
**Response.** WLOG, we can let $x_+=(\sin(\Delta/2),-\cos(\Delta/2),0_{d-2}^T)^T$ and $x_-=(-\sin(\Delta/2),-\cos(\Delta/2),0_{d-2}^T)^T$. In this case, the subspace $D={\rm span}\{x_+,x_-\}$ is ${\rm span}\{e_1,e_2\}$. For the $k$-th neuron at time $t$, denoted $b=(b_1,\cdots,b_d)^T$, its projection onto $D$ is $(b_1,b_2,0_{d-2}^T)^T$. Hence, its polar coordinates are $(\rho,\theta)=(\sqrt{b_1^2+b_2^2},\arctan(b_2/b_1))$.
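A minimal runnable sketch of this projection (our illustration; the values of $\Delta$ and $d$ are arbitrary, and `np.arctan2` is used as the quadrant-aware form of $\arctan(b_2/b_1)$):

```python
import numpy as np

# Minimal sketch of the projection described above.  Delta and d are
# arbitrary illustrative choices; arctan2 is the quadrant-aware arctan(b2/b1).
def project_to_polar(b):
    rho = np.hypot(b[0], b[1])        # sqrt(b1^2 + b2^2)
    theta = np.arctan2(b[1], b[0])    # angle within span{e_1, e_2}
    return rho, theta

delta, d = np.pi / 15, 8
x_plus = np.concatenate(([np.sin(delta / 2), -np.cos(delta / 2)], np.zeros(d - 2)))
rho, theta = project_to_polar(x_plus)
# x_+ already lies in the plane, so rho == 1 and theta == -(pi/2 - delta/2)
```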
**Q2. Data normalization.**
**Response.** Yes, the data is normalized. In Assumption 3.1, the requirement $x_+,x_-\in S^{d-1}$ (Line 136) is in fact $\ell_2$ normalization.
**Q3. Explain $T_I$.**
**Response.** First, we clarify that $T_I=10\sqrt{\kappa_1/\kappa_2}$ is a result derived in our proof rather than an assumption. By our analysis, initial condensation is completed by $T_I$. Second, we can offer an intuitive explanation for the positive correlation between $T_I$ and $\kappa_1$, and the negative correlation between $T_I$ and $\kappa_2$. For small initialization, during initial training, neurons' radial growth is much slower than their tangential speed, leading to directional condensation. For the tangential velocity $\|dw_k(t)/dt\|$ (Eq. (7)), notice that (i) $\kappa_2/\sqrt{m}\rho_k(0)=\kappa_2/\kappa_1$; (ii) during transient initial training, $F_k(t)$ mostly stays constant, independent of $\kappa_1$ and $\kappa_2$. Combining these facts, we see that smaller $\kappa_1$ and larger $\kappa_2$ correspond to greater tangential speed, leading to smaller $T_I$.
**Q4. Intuitive explanations for two dynamics.**
**Response.** We agree with the reviewer that it is important to provide intuitive explanations for these dynamics. We will add this aspect in our revised version.
- **Some positive neurons (PN) move from $\mu$ to $x_-^\perp$.** We provide an intuitive understanding here; for a rigorous proof, please refer to Lemmas C.3--C.10. For a PN such that $w_k(0)^Tx_+>0$ and $w_k(0)^Tx_->0$, the following holds:
- During Phase I, tangential velocity is much larger than radial velocity, and vector field $F_k(t)\approx\mu$. Thus PN rapidly align well with $\mu$. Concurrently, some negative neurons (NN) align with $x_+^\perp$. Additionally, since living PN are much closer to $x_-$ than living NN at $T_I$, the prediction of $x_-$ is incorrect ($f_-(T_I)>0$).
- After Phase I, to correct the prediction of $x_-$, living PN gradually move away from $x_-$ and decrease positive $w_k^Tx_-$. Notably, living PN can’t move into {$w:w^Tx_-<0$} because the vector field abruptly changes at boundary {$w:w^Tx_-=0$} (due to ReLU’s non-smoothness), redirecting PN to {$w:w^Tx_->0$}. Thus, living PN eventually satisfy $w_k^Tx_-=0$. Notably, vector field $F_k(t)$ lies in the subspace span{$x_+,x_-$}, causing living PN to reach $x_-^\perp$.
- **Only living negative neurons converge toward directions dependent on initialization.** We will intuitively explain this via "simplicity bias". For rigorous proof, please refer to our proof of Theorem 4.10.
- In Remark 4.2, we demonstrated that after Phase I, the number of living positive neurons (LPNs) exceeds that of living negative neurons (LNNs).
- At the end of Phase III, the network has the simplest pattern: LPNs exclusively predict $x_+$, while LNNs solely predict $x_-$. However, the prediction $f_-(t)$ evolves much more slowly than $f_+(t)$, which GF disfavors. GF then aims to rectify this speed imbalance through a simple approach: adjusting the directions of the fewer LNNs while preserving the LPNs' directions. Precisely, for LNNs, GF increases $w_k^Tx_-$ and reactivates them on $x_+$.
- Hence, as shown in Thm 4.10, different ratios $\alpha$ between the numbers of LPNs and LNNs can yield different convergent directions of LNNs, without affecting LPNs.
[1] Boursier et al. Gradient flow dynamics of shallow relu networks for square loss and orthogonal inputs. (NeurIPS 2022)
[2] Zhou et al. Towards Understanding the Condensation of Neural Networks at Initial Training. (NeurIPS 2022) | Rebuttal 1:
Rebuttal: **``Global'' Response to All Reviewers.**
1. First, we sincerely thank all the reviewers for appreciating our result, i.e., a theoretical analysis of multi-phase optimization dynamics and the rich nonlinear behaviors of ReLU networks. We also thank all the reviewers for their comments and suggestions to improve our paper. In our revised version, we will correct all typos, provide complete experimental settings and results, and incorporate the discussions with the reviewers.
2. **More Experiments.** In response to multiple reviewers' suggestions regarding the need for further numerical validation of our theory, we have included two additional experiments to showcase the effectiveness of our theoretical findings. Below, we summarize the experimental setups, results, and conclusions. As for detailed experimental results, please refer to our updated **one-page PDF**.
- **Experiment 1: More hyperparameter settings.**
- **Setup.** We run experiments under more hyperparameter settings (different $\Delta$, $p$, $\kappa_1$, $\kappa_2$). We aim to verify whether the same four training phases persistently exist and evaluate the consistency between our theoretical bounds ($T_{\rm I}$, $T_{\rm plat}$, $T_{\rm II}$, $T_{\rm III}$) and empirical outcomes.
- **Result.** Due to space constraints, a subset of the results is presented in **Table 2 in the one-page PDF**. Precisely, the table displays outcomes under different $\Delta$ and $p$ (data-dependent hyperparameters), and we focus on the changes of $T_{\rm plat}$ (the plateau time of training accuracy) and $T_{\rm III}$ (the ending time of Phase III).
- **Conclusion.** From our numerical results in Table 1 in the one-page PDF, we have two main conclusions: (i) the four training phases in our theory persistently exist; (ii) our theoretical estimates of $T_{\rm plat}$ and $T_{\rm III}$ demonstrate noteworthy alignment with the empirical results (in terms of $p$ and $\Delta$).
In the revised version, we will present the complete results.
- **Experiment 2: Noisy Data.**
- **Setup.** We conduct numerical experiments in the setting where small stochastic noise is added on top of $x_+$ and $x_-$, a slightly more realistic setting. Specifically, in span{$x_+,x_-$}, we perturb the angles of the data $x_+$ and $x_-$ by noise $\xi\sim{\rm Unif}([0,\Delta/4])$.
- **Result.** In **Figure 5 in the one-page PDF**, we visualize (i) the evolution of each neuron throughout the training process; (ii) some key data directions; (iii) the evolution of training accuracy. To compare these results with the noiseless data, the reviewers can refer to our previous Figures 3 and 4 in Appendix A.
- **Conclusion.** From our numerical results in Figure 1 in the one-page PDF, we have two main conclusions: (1) the same four-phase optimization dynamics and nonlinear behaviors persist, even for our dataset with small stochastic noise; (2) a slight difference is that there is more than one plateau of training accuracy in Phase II. The reason is that for noisy data, GF needs to learn the negative data $(y=-1)$ one by one in Phase II. For example, three distinct negative data points are employed in this experiment, so three plateaus of training accuracy emerge (12/15, 13/15, and 14/15).
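The angular perturbation in the setup above can be sketched as follows (the base angle, $d$, and $n$ are our illustrative choices, not the exact experimental values):

```python
import numpy as np

# Illustrative sketch of the noise model: within span{x_+, x_-} (taken here to
# be span{e_1, e_2}), each sample's angle is jittered by xi ~ Unif([0, Delta/4]).
# The base angle, d, and n are arbitrary choices for illustration.
rng = np.random.default_rng(0)
delta, d, n = np.pi / 15, 8, 15

def on_sphere(angle, d):
    # unit vector in span{e_1, e_2} at polar angle `angle`
    return np.concatenate(([np.cos(angle), np.sin(angle)], np.zeros(d - 2)))

base_plus = delta / 2
noisy_plus = [on_sphere(base_plus + rng.uniform(0, delta / 4), d) for _ in range(n)]
# every noisy sample stays unit-norm, at most Delta/4 away from the clean angle
```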
3. We have addressed every concern raised by each reviewer through separate responses provided below.
Pdf: /pdf/d7384883778a209b4dcd3773f434971635266e1d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples | Accept (poster) | Summary: This paper poses the problem of learning a controller from a batched dataset containing time-series observations of the world, actions, and a value estimate (e.g., for determining how to treat a patient given the results of medical tests based upon patient outcomes). This problem is considered with a POMDP formalism with execution traces. The paper imposes assumptions about the linearity of the mapping from the belief space to the value space in order to create some analytical properties about how wrong the model might be or to mitigate the model error. The algorithm, ABC, is evaluated against baselines in a set of batched-data experiments, including in a healthcare application.
Strengths: +The paper addresses an important problem of bringing safety to machine learning
+The paper clearly states assumptions and presents logical arguments and definitions to support its thesis.
+The evaluation shows positive results and does so in important domains (e.g., healthcare)
Weaknesses: -The paper seems to be addressing the problem of offline reinforcement learning without actually addressing offline reinforcement learning. Though, this paper does cite a plethora of papers that address this topic. As such, it is difficult for the reviewer to properly contextualize this paper in this relevant prior work.
-The paper appears to be missing a number of baselines for offline reinforcement learning, as shown below. Some of the baselines chosen do not have access to the same set of information available to the ABC algorithm (e.g., the BC model).
Chen, L., Lu, K., Rajeswaran, A., Lee, K., Grover, A., Laskin, M., Abbeel, P., Srinivas, A. and Mordatch, I., 2021. Decision transformer: Reinforcement learning via sequence modeling. Advances in neural information processing systems, 34, pp.15084-15097.
Kumar, A., Zhou, A., Tucker, G. and Levine, S., 2020. Conservative q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33, pp.1179-1191.
-The paper develops theory to bring accountability to this problem. However, the results section provides some relatively simple computational examples and a qualitative description that is in the eye of the beholder. It would have been better to provide a clearer, more convincing test to demonstrate that there are clear guarantees and fulfilled analytical properties.
-One could have considered a human-subject experiment to evaluate whether this approach is really "accountable." Literature on accountability could have been considered as well. For example, see:
Kim, B. and Doshi-Velez, F., 2021. Machine learning techniques for accountability. AI Magazine, 42(1), pp.47-52.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: -Why is this problem different than offline reinforcement learning?
-Why are offline RL baselines not included?
-How do the results provide convincing evidence of accountability in a non-superficial manner?
-For what class of problems is the linear assumption for the mapping from belief to value reasonable?
-What is the computational complexity of the approach?
-How scalable is the approach with the size of the state space?
-By relying on belief space modeling, why is this approach relevant to large, real-world problems?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper does not mention "limit" once.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper. We will respond to each question in turn:
---
### Q1: Why is this problem different than offline reinforcement learning?
A1: In this paper, we study the accountability of offline decision-making in high-stakes systems and apply the proposed method to a real-world healthcare dataset. ABC performs a form of offline reinforcement learning, but unlike existing work on offline RL, it seeks to achieve P1-P5 below.
As we have emphasized in our abstract, introduction, related work, and experiment sections, what makes ABC different from the existing literature, including offline RL, is its 5 properties:
- P1: controllable conservation to avoid aggressive extrapolation;
- P2: accountability that provides a decision basis;
- P3: suitability for low-data regimes;
- P4: adaptability to user specifications that allows customization;
- P5: flexibility in strictly batched imitation settings for broader applicability.
Out of those 5 properties, **Offline-RL only satisfies P1**.
We also discussed the similarities and differences between ABC and offline RL in the Extended Related Work section (Appendix B).
### Q2: Why are offline RL baselines not included?
A2: The only experiment in which Offline-RL could be compared with ABC is Sec.5.1. In all other experiments, Offline-RL does not enjoy the properties we discussed in the paper. Nor do those Offline-RL algorithms address the issue of accountability in decision-making.
We **conducted additional experiments** to better address the reviewer's concern and provide **results in the attached PDF file** due to space limitations.
### Q3: How do the results provide convincing evidence of accountability in a non-superficial manner?
A3: (1) In our main text, Sec.5.3 highlights the accountability of the proposed method, where we conclude that ABC's decision-making process exhibits strong accountability, as the **corpus subset can be tracked at every decision-making step**. This is evidenced by ABC's successful completion of a multi-stage Maze task, wherein it effectively learns from mixed trajectories generated by multiple policies at differing stages.
(2) Apart from the above qualitative results that permit visualization, we also provided additional empirical evidence on the property of accountability in Section 5.5, where we study the real-world healthcare application of ABC, and highlight how **accountability helps to identify boundary examples in decision-making**.
(3) Moreover, in Appendix F.5, we demonstrate how to leverage the **accountability of ABC to identify OOD examples**, which provides further evidence of the accountability of our proposed method.
### Q4: For what class of problems is the linear assumption for the mapping from belief to value reasonable?
A4: **Our Remark 3.3 answers this question.**
The linear relationship between the latent belief space and the value is a property rather than an assumption. As we have stated in Remark 3.3, this property commonly holds for neural network approximators, where the belief state is the last activation layer before the final linear output layer.
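As a hedged illustration of this architectural property (not the authors' implementation; all shapes and names below are hypothetical), a tiny numpy sketch shows that once the output layer is linear, the value is exactly linear in the last activation layer:

```python
import numpy as np

# Hypothetical value network: two tanh hidden layers plus a linear head.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(16, 16))
l_w, l_b = rng.normal(size=16), 0.1

def belief(x):
    # layers before the final linear head; their output is the "belief state"
    h = np.tanh(W1 @ x)
    return np.tanh(W2 @ h)

def value(x):
    # linear output layer applied to the belief state
    return l_w @ belief(x) + l_b

x = rng.normal(size=8)
# Linearity in the belief is a structural property of the network,
# not an assumption on the task: v(x) = l . b(x) + bias by construction.
assert np.isclose(value(x), l_w @ belief(x) + l_b)
```

No task-specific assumption appears anywhere in the sketch; only the final layer's linearity matters.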
### Q5: What is the computational complexity of the approach?
A5: **Our discussion in Appendix D.3 and D.5 can answer this question.**
Due to the page limit, we deferred the discussion of computational complexity, hardware requirements, and wall-clock running time to Appendix D.3 and D.5, respectively.
With our proposed solution, the convex hull decomposition takes less than 10 seconds with a uniform sampler that samples 100 actions randomly for every time step. Increasing the number of sampled actions will lead to a sub-linear increase in computational time with parallelization.
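As an illustrative sketch of this decomposition step (not the paper's actual implementation; the 2-D corpus below is hypothetical), expressing a query belief as a convex combination of corpus beliefs that form a simplex around it reduces to a small linear system:

```python
import numpy as np

# Hypothetical corpus of belief states forming a simplex in 2-D,
# and a query belief lying inside their convex hull.
corpus = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
query = np.array([0.25, 0.25])

# Solve  sum_i w_i * corpus_i = query  subject to  sum_i w_i = 1.
A = np.vstack([corpus.T, np.ones(3)])
w = np.linalg.solve(A, np.append(query, 1.0))

assert np.all(w >= -1e-9)              # convex weights (query is inside the hull)
assert np.allclose(w @ corpus, query)  # the query is reconstructed exactly
assert np.isclose(w.sum(), 1.0)
```

The weights `w` then interpolate the values attached to the corpus examples, which is what keeps the estimate conservative (no extrapolation outside the hull).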
### Q6: How scalable is the approach with the size of the state space?
A6: **Our experiments in Appendix F.3-F.5 are designed to answer this question.**
**ABC can work both in isolation and combined with black-box policies.** ABC can be used as a plug-in to add accountability to black-box controllers in a post-hoc manner. In high-dimensional control tasks, uniform sampling can be inefficient, and black-box samplers can alleviate this difficulty.
### Q7: By relying on belief space modeling, why is this approach relevant to large, real-world problems?
A7: In this study, we examine the batch control problem, which holds **significant potential for applications in costly, risk-sensitive domains such as healthcare and finance.** While previous works have primarily focused on efficient learning in batch settings, the accountability of offline decisions remains largely unexplored despite its importance.
In healthcare, it's vital that decisions are based on a supportive basis. For instance, **when a patient is treated in a certain manner, it should be based on the successful outcomes of previous patients with comparable conditions who received the same treatment.** The ability to trace the supportive basis of decisions enhances the process of policy reasoning and debugging, thereby improving the trustworthiness of decision-making systems.
While technically ABC is built on top of belief space modeling, the accountability generated by ABC is instance-level, based on the one-to-one mapping between examples and belief states. Hence, **the use of belief space modeling in ABC does not restrict its applicability or effectiveness in real-world problems.** Therefore, the modeling of belief space enables a more nuanced understanding of individual decisions, aligning with the complexity of real-world scenarios.
### Q8: The paper does not mention "limit" once.
A8: In fact, we do have a discussion section, **Appendix G. Limitations and Future Work**, on Page 27 of our paper. We have updated our conclusion and refer readers to Appendix G for limitations.
---
We hope that these clarifications address your concerns, and we are happy to have further discussions should they remain unclear.
---
Rebuttal Comment 1.1:
Title: Missing attachment
Comment: Unless I am mistaken, I believe the authors were going to attach a document: "We conducted additional experiments to better address the reviewer's concern and provide results in the attached PDF file due to space limitations."
However, I don't believe I see this document. Can the authors confirm?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer qevo
Comment: Dear Reviewer qevo,
We sincerely thank you for your follow-up response.
According to the rebuttal guideline, we provided the one-page pdf attachment in the **overall response**. Please see the **Author Rebuttal by Authors** at the very top of this page, and the attachment we mentioned can be downloaded there.
Many thanks for your consideration. Please let us know should there be any additional questions or concerns, we are more than willing to provide further explanations.
Regards,
Authors | Summary: This paper investigates imitation learning in scenarios with limited data. The proposed approach (ABC) involves utilizing a linear combination of the belief space to generate accountable decisions.
The authors evaluate the performance of ABC in simulated and real-world healthcare scenarios, highlighting its ability to effectively handle batched control tasks while maintaining a high level of performance and accountability.
Strengths: 1. The paper is presented in a clear and easily understandable manner.
2. This article investigates a highly meaningful direction, which generates reliable strategies through offline data.
Weaknesses: 1. Assuming that the value is a linear combination of belief states is a strong assumption, which may not hold true in certain tasks, particularly those that are harder and more complex.
2. The optimization problem (Eq. 9) seems to be ill-defined. Eq. 9 could have an infinite number of solutions because the effect of $l\circ b$ is equivalent to $cl \circ c^{-1}b$, where $c$ represents a scalar. This would have an impact on the distance calculation when solving Eq. 10. It would be beneficial to address the limitations of the optimization procedure and conduct additional experiments to examine the influence of different solutions to the optimization problem (Eq. 9).
3. The paper lacks a discussion and comparison with offline reinforcement learning, which is an important related work and a strong baseline.
4. The evaluation is confined to simple environments. For more challenging tasks, please refer to [1].
Reference:
[1] Fu, Justin, et al. "D4rl: Datasets for deep data-driven reinforcement learning." arXiv preprint arXiv:2004.07219 (2020).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Could you provide the results of offline reinforcement learning methods in the test environments?
2. Can you provide the environment results obtained when applying the proposed approach to more challenging environments?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: This work is primarily focused on simple environments and does not address more complex scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper. We will respond to each point in turn:
---
### 1. Property 3.2 is not an Assumption
Property 3.2 is **NOT an assumption**, and is **INDEPENDENT OF SPECIFIC TASKS**. It concerns the architecture of the neural network being used (not an assumption imposed on the environment).
We wish to clarify that the belief state in our context is the **values of the last activation layer in neural networks**, hence it is independent of the task. As long as there is a neural network with a linear layer as the output layer that approximates the value function, we are able to use its last activation layer as our belief state.
Therefore, we would like to note that our property 3.2 is not an assumption imposed on the environment, but a property that is satisfied by general neural network approximators, as we have noted in Remark 3.3.
### 2. Eqn. (9) is Well-Defined.
Eqn. (9) depicts the MSE minimization using neural networks. It’s a well-defined problem.
To be specific, we consider a neural network with a linear output layer. We denote the layers before the last as $\mathbf{b}$, and the linear output layer as $\mathbf{l}$. We explicitly write the value function approximation minimization process in Eqn.(9). We would like to note that different solutions of this equation correspond to different neural network parameters, and any set of parameters that minimizes the loss function can be used in practice.
We do not assume the uniqueness of the optimized neural network parameters, and our optimization process following Eqn.(9) will not be affected.
To see this, suppose we have $\mathbf{b}' = c^{-1}\mathbf{b}$ and $\mathbf{l}' = c\,\mathbf{l}$; Eqn.(10) and Eqn.(11) will still yield identical results. This is because the calculations following Eqn.(9) only use the relative information among beliefs, rather than their absolute values.
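A small numpy sketch (purely illustrative, with hypothetical shapes) can verify this invariance: the rescaled pair predicts identical values, and relative quantities among beliefs, such as cosine similarity, are unchanged by a uniform rescaling:

```python
import numpy as np

rng = np.random.default_rng(1)
b = rng.normal(size=(5, 16))   # belief states of 5 examples (hypothetical)
l = rng.normal(size=16)        # weights of the linear output layer
c = 3.7                        # arbitrary nonzero scalar

b2, l2 = b / c, c * l          # the rescaled solution (c^{-1} b, c l)

# Identical value predictions: (c l) . (c^{-1} b) = l . b
assert np.allclose(b2 @ l2, b @ l)

# Relative information among beliefs (here cosine similarity) is
# also unchanged by a uniform rescaling.
def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

assert np.isclose(cosine(b[0], b[1]), cosine(b2[0], b2[1]))
```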
### 3. We added Offline RL Algorithms as Additional Baselines
**The only experiment in which Offline-RL algorithms can be compared with ABC is Section 5.1.** In all other experiments, Offline-RL **does not enjoy the properties we discussed in the paper.** Nor do those Offline-RL algorithms address the issue of accountability in decision-making.
To address the reviewer's concern, we additionally compare ABC with CQL and TD3-BC in Section 5.1. Not surprisingly, we find that the performance of the original CQL and TD3-BC on Pendulum-Het is poor. This is because they are not designed for partially observable tasks with high stochasticity.
Therefore, we further implemented an improved version of TD3-BC with a recurrent context encoding module [cf. Meta-Q-Learning]. In these experiments, we find that the learning process of offline RL is not stable, leading to a large variance in policy quality; we observed a similar problem in our previous MFRL baseline and reported it in detail in Appendix D.7.
_The cumulative reward of each method is reported. **Higher is better.**_
| Task | Low-Data | Mid-Data | Rich-Data |
|-------------------|---------------------|---------------------|---------------------|
| ABC | **-1.39 ± 1.39** | **-1.25 ± 0.40** | **-0.6 ± 0.08** |
| BC | -422.77 ± 409.51 | -225.32 ± 340.83 | -126.74 ± 280.73 |
| TD3 | -4.1 ± 2.76 | -11.95 ± 4.68 | -15.27 ± 6.46 |
| CQL | -793.89 ± 206.0 | -889.85 ± 291.99 | -805.49 ± 578.75 |
| TD3-BC | -844.95 ± 170.93 | -781.8 ± 337.11 | -821.02 ± 587.77 |
| TD3-BC-Recurrent | -82.92 ± 115.79 | -41.92 ± 59.28 | **-0.45 ± 0.57** |
| Data-Avg-Return | -307.81 ± 387.53 | -245.54 ± 338.65 | -208.81 ± 272.84 |
### 4. Difference between ABC and Offline-RL
We would like to note that Offline-RL is related to our work (Appdx. B.1), yet it is not our primary focus. The mentioned D4RL benchmark is famous in Offline-RL, yet it presents a weak link to the main problem of accountability studied in this work. In fact, **the only experiment that Offline-RL algorithms can be compared with ABC is Section 5.1.** In all other experiments, Offline-RL does not enjoy the properties we discussed in the paper.
We wish to clarify that **we mainly studied the accountability of offline decision-making in high-stake systems, and applied the proposed method to real-world healthcare dataset.** This is different from the normal pursuit of Offline-RL that aims at improving conservative value estimation.
To make it more explicit, we emphasized in our abstract, introduction, related work, and experiment section that what makes ABC different from existing literature including Offline-RL is its 5 properties:
- P1: controllable conservation to avoid aggressive extrapolation;
- P2: accountability that provides a decision basis;
- P3: suitability for low-data regimes;
- P4: adaptability to user specifications that allows customization;
- P5: flexibility in strictly batched imitation settings for broader applicability.
Out of those 5 properties, Offline-RL only explicitly satisfies P1.
To address the reviewer's concern about more challenging tasks, we **refer to additional experiments in Appendix F.3, F.4, F.5**, where we demonstrate the scalability of ABC and the potential of combining ABC with black-box policies. From such a perspective, ABC can be used as a post-hoc interpretation mechanism for any given black-box algorithm like CQL, TD3-BC, etc. This could be interesting and important future work but is out of the scope of this methodology paper focusing on accountability, rather than Offline-RL.
---
We hope that these clarifications address your concerns, and we are happy to have further discussions should they remain unclear.
---
Rebuttal 2:
Title: Further Discussions and Feedback Welcome
Comment: We deeply appreciate the insights you've shared during the review process. Following our revisions and previous responses, we are genuinely curious if we have adequately addressed the concerns you raised.
Should there be any leftover questions, concerns, or areas you feel need more clarification, please do not hesitate to let us know. We greatly respect your insights and stand ready to make any additional refinements based on your feedback.
---
Rebuttal 3:
Title: Gentle Reminder: Feedback on Our Submission
Comment: It's been over a week since we shared our response addressing each point from the prior review comments, and we haven't yet had the privilege of receiving your feedback. To ensure we maximize the discussion period, we gently reach out, hoping the reviewer might engage further. We would appreciate it a lot if the reviewer could please let us know if there are any additional concerns or areas that need our clarification.
---
In order to assist the reviewer in recalling the specifics of the work, the previous comments, and our response, we would like to offer a brief summary of each aspect:
## Summary of Our Work
In our work, we study the problem of accountability in batched control tasks. We proposed the Accountable Batched Controller (ABC), which goes beyond the previous literature on offline RL by having 5 unique properties that are desired in responsibility-sensitive scenarios:
- P1. Conservative;
- P2. Accountable;
- P3. Works in low-data regimes;
- P4. Adaptive to user specifications;
- P5. Works in the strictly batched imitation setting.
We demonstrated those properties through extensive empirical studies: we verify each of those properties through separate experiments, each of which at least contains two environments with varying set-ups. We highlight the use case of ABC in a real-world healthcare dataset.
Additionally, we provide more qualitative results in the appendix, and extension of ABC to work as a post-hoc plug-in for understanding black-box policies.
## Summary of Comments and Our Responses
In your previous comments, you mainly had the following questions, and we answered those questions through the previous response:
### Point 1
You took the linear relationship between the belief and value to be an assumption, which is unfortunately a misreading.
### - Our Response
We have pointed out that **this is a misinterpretation**. We **DO NOT** impose any linear assumption on the tasks, and our linear decomposition is independent of the complexity of tasks. Property 3.2 and remark 3.3 highlighted the general applicability of our proposed method: the existence of such linear decomposition is a property of neural value estimators as long as they apply a linear output layer, which is the most common implementation in the field.
### Point 2
You thought that Eq.(9) is not well-defined, which is unfortunately also a misreading.
### - Our Response
We have argued that **this is not true**, as Eq.(9) depicts the MSE minimization objective. Please kindly let us reiterate that we do not impose uniqueness constraints on optimizing neural networks (as no one would do so). Any optimizer that minimizes Eqn. (9) is enough for our algorithms to work.
### Point 3
You commented we should make a comparison with more offline-RL baselines. **We have added experiments as suggested.**
### - Our Response
We first argued that there are clear differences between the general interests of offline RL and our work: **ABC pursues the 5 unique properties** desired for responsibility-sensitive decision-making systems, while offline RL mainly focuses on **1 of those 5** aspects: conservative learning. We acknowledge the importance of research in offline RL and its challenging environments, yet we believe **both topics are important and warrant individual study to make scientific progress.**
In order to better address your concerns, we have **followed your suggestion to provide additional experiments** by comparing CQL and TD3-BC, two of the most prevailing offline-RL algorithms, as additional baselines in section 5.1. Kindly allow us to emphasize that this section is the only section that offline-RL can be compared to our method. **For all other sections (5.2-5.6), the characteristics we have underscored are unique to our method and not inherent to offline-RL algorithms, making them non-comparable.**
In our additional empirical findings, we demonstrated that **conventional offline-RL algorithms do not serve as well-performing baselines** for the problems studied in this work. Consequently, we further improved TD3-BC, enhancing its ability on tasks that necessitate **historical transition memory and competence in addressing stochasticity.** These attributes are paramount, especially in the domains we have focused on, such as healthcare. Our results indicate that offline-RL algorithms tend to prefer larger datasets, as data scarcity can severely impact stability. We will release our improved TD3-BC implementation to the community, offering yet another valuable resource.
---
We genuinely value your perspective and were wondering if there might be any outstanding questions or concerns we can assist with. Your insights are of utmost importance to us, and we aim to make the most of the available time to address any points you may raise.
---
Rebuttal 4:
Comment: I am deeply appreciative of the authors for their response, which effectively addresses several of my concerns.
I have made corresponding adjustments to my score (3 to 5).
However, I really suggest the authors reorganize the presentation of the paper since it is quite confusing to me.
---
Rebuttal Comment 4.1:
Title: Thanks for Sharing Further Suggestions
Comment: We sincerely thank the reviewer for the encouraging feedback and kind consideration in re-evaluating our work.
In response to your suggestions regarding the presentation and to enhance the clarity of the paper, we have made a series of updates to our manuscript. We have detailed these changes in the official comment titled "Follow-Up Author Response on Presentation to Reviewer fhj3 and fQBY".
Our revision mainly includes 1. reorganization of the introduction; 2. a method sketch paragraph before introducing the method; and 3. extended related work that focuses on distinguishing ABC from offline-RL literature. We genuinely hope our revised manuscript could meet your expectations and provide clarity for all readers.
We would appreciate it if you could kindly let us know if there were any further concerns or suggestions on the presentation. In the limited time remaining, we are still eager to do our utmost to address them!
Regards,
Authors | Summary: This work proposes accountable batched control with five desirable properties. The design is motivated by the fact that the reward or feedback of trajectories are hard to obtain in high-stake responsibility-sensitive applications. The minimal hull subset of the decision corpus is constructed for the decomposition of the value function for each candidate action. Then the optimal policy is selected in terms of the weighted value function. This work demonstrates the promise of the five properties on one real-world healthcare dataset and one simulated maze environment.
Strengths: After the rebuttal I tentatively raised my score from 5 to 6.
---
The paper is quite obscure and not easy to understand, making it challenging to grasp a complete understanding of the methodology it presents. However, I appreciate that the authors included a video presentation in the appendix, which saved my time. In particular, the animation of the algorithm makes it easier to understand.
The introduction of batched control is a seemingly novel contribution. The experimental setup and analysis are presented in a concrete and well-written manner, albeit deferred to the appendix due to space constraints. In addition, the authors attached an anonymous link to their code implementation.
Weaknesses: While I may not have a comprehensive understanding of the literature, it appears that the focus of this paper leans more towards reinforcement learning rather than the chosen primary area of interpretability and explainability.
I am uncertain about the reasons behind the batched controller possessing the five advantageous properties in comparison to other well-known methods, including Q-learning, model-based RL, and behavior clone. Although I skimmed through the Appendix B, the interpretation is still unclear.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The term *decision corpus* is never explained. How does it differ from the trajectory in the offline data?
2. The batched controller appears to neglect the historical decisions and fails to account for the temporal correlation among actions. How can you guarantee that the policy will receive the optimal accumulated rewards?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have included a separate broader impact section in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper. We will respond to each point in turn:
---
### 1. Definition of the Decision Corpus
The definition of _Decision Corpus_ is explained in line 6 in the abstract and line 61 in Sec. 3. In our context, we use _Decision Corpus_ to refer to the offline decision dataset.
We have improved the clarity of the definition and made it more explicit in our revision.
### 2. Historical Decisions are Captured by the Belief State
- **[No temporal correlation among actions]** We would first note our work is developed under the general Partially Observable Markov Decision Process (POMDP) setting, denoted as a tuple $(\mathcal{X}, \omega, \mathcal{O}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \gamma, \rho_0)$. Under the Markovian property, **there should be no temporal correlation among actions.**
- **[Ability to capture historical decisions]** In such a setting, it is important to capture the historical decisions and transitions, therefore, we introduced the observational transition history variable $h_t$, defined as $h_t = (o_{<t}, a_{<t}, r_{<t}) \in \mathcal{H} \subseteq \mathbb{R}^{(d_o+d_a +1)\cdot (t-1)}$. In our work, **the belief mapping $\mathbf{b}$ is a function of observation, action, and historical transition**, i.e., $\mathbf{b} = \mathbf{b}(o_t, a_t, h_t)$, to capture the information in historical observations, decisions and return.
- **[Optimize accumulated rewards]** In order to optimize the correct learning objective of maximizing cumulative episodic return, **we use the cumulative return as $v^c$ in Eqn.(2).**
As a consequence, the estimated value $\hat{v}$ approximates the true cumulative return, and ABC optimizes this cumulative reward proxy in decision-making.
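For illustration only (assuming standard discounted returns; the exact form of $v^c$ is defined in the paper's Eqn.(2), not reproduced here), the cumulative-return targets can be computed per trajectory by a backward pass over the rewards:

```python
import numpy as np

# Hedged sketch: discounted cumulative-return targets G_t from one
# trajectory's rewards, to serve as regression targets for the value.
def cumulative_returns(rewards, gamma=0.99):
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G      # G_t = r_t + gamma * G_{t+1}
        out.append(G)
    return np.array(out[::-1])

rets = cumulative_returns([1.0, 0.0, 2.0], gamma=0.5)
# G_2 = 2, G_1 = 0 + 0.5*2 = 1, G_0 = 1 + 0.5*1 = 1.5
assert np.allclose(rets, [1.5, 1.0, 2.0])
```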
### 3. Extended Discussion Comparing ABC with Existing Algorithms
Below, we discuss each of the desired properties in turn. For each of the properties, we start with introducing the definitions of the property, followed by comparisons among ABC and MFRL (Q-learning), MBRL, and BC.
(1) **Accountability**: the decision-making process is traceable, and the decisions can be supported by concrete examples in the offline dataset.
- ABC: the decisions of ABC are accountable, since they are generated referring to the Corpus Subset within the minimal convex hull. Those examples provide an example-based explanation of the decisions.
- MFRL: In MFRL, a black-box value network and black-box policy network are learned with the offline dataset. There is no decision support for the black-box policies.
- MBRL: In MBRL, a black-box world model optimized with the offline dataset is used as a proxy of the true dynamics, and planning algorithms are then applied to such a black-box model to make decisions. Those decisions are not supported by explicit references.
- BC: In BC, a black-box policy is learned through supervised learning. The output of such a policy is hard to link to specific training examples.
(2) **Conservation**: estimations of decision outcomes are interpolated, avoiding aggressive extrapolation that is harmful in offline control.
- ABC: ABC performs conservative decision-making by using decision supports within a minimal convex hull. How such a decomposition in the convex hull improves conservation is justified theoretically by Proposition 3.8 (Estimation Bound) and Proposition 3.10 (Existence and Uniqueness).
- MFRL: In MFRL like CQL and TD3-BC, the conservation is explicitly given as constraints or distribution matching. We would note that conventional MFRL algorithms are not designed for those tasks and suffer from aggressive extrapolation.
- MBRL: Similar to MFRL, conservation must be added to MBRL through external effort, because the conventional design of model-based learning does not address this issue.
- BC: In BC, the learning objective is to minimize the prediction difference. There is little we can do to aid conservation.
(3) **Low-Data**: whether a method works in the low-data regime.
- ABC: The decision process of ABC only relies on a few examples constituting the minimal convex hull, hence the algorithm performs well under the low-data regime.
- MFRL: In MFRL, the black-box value network and policy network can be designed to be sample-efficient.
- MBRL: In MBRL, sufficient data is always required to learn an accurate world model.
- BC: the performance of BC is highly dependent on the quality of data. It is not designed for the low-data regime.
(4) **Adaptive**: whether the control behavior of a method can be adjusted according to additional constraints, such as clinical guidelines, without modification or re-training.
- ABC: by changing reference examples, i.e., the decision corpus, during test time inference, ABC can seamlessly perform different types of decision-making according to user specifications.
- MFRL: In MFRL, when new data is used, a new value network and policy network need to be re-trained.
- MBRL: In MBRL, the world model construction is independent of the data, hence the decisions can be adaptive by changing a new planning algorithm on top of the world model. No model re-training is needed.
- BC: In BC, a new model needs to be trained with a specified type of decision corpus.
(5) **Reward-Free**: availability of extension to the strictly batched imitation setting where rewards are unavailable.
- ABC: We have shown that the key insight of making decisions through belief space similarity can be extended to settings without reward signals, as discussed in Appendix C.
- MFRL: In MFRL, the Q-values can not be calculated without the reward function.
- MBRL: In MBRL, the planning algorithms do not have a clear objective to optimize without reward signals.
- BC: BC is not affected by the absence of reward signals, because it does not need the reward to learn its policy.
---
Should there be any additional questions or concerns, we are more than willing to provide further explanations.
---
Rebuttal Comment 1.1:
Title: Raise my score
Comment: I thank the authors for their detailed response to all of my questions. I thoroughly read the other reviewers' feedback and the authors' response. In general, I like the neat idea of finding the minimal hull. However, the authors also admitted that their method primarily focuses on high-stake decision-making, instead of the conventional offline reinforcement learning setting. That is why I raised the score only to 6 rather than 7 or above.
---
Reply to Comment 1.1.1:
Title: Specificity as Asset
Comment: We sincerely thank you for taking the time to review our manuscript and for your thoughtful consideration of the feedback provided by other reviewers, as well as our responses.
We acknowledge and respect your perspective regarding the primary focus of our method on high-stake decision-making scenarios. We believe that this specificity can be an asset and unique contribution, as it addresses a crucial area within the broader domain. While previous works on offline RL have primarily focused on efficient learning with conservation, the accountability of offline decisions remains largely unexplored despite its importance. We believe both of those topics are important and warrant their individual study to make scientific progress.
In critical domains like healthcare, it is vital that decisions are based on supportive evidence. For instance, when a patient is treated in a certain manner, it should be based on the successful outcomes of previous patients with comparable conditions who received the same treatment. The ability to trace the supportive basis of decisions enhances policy reasoning and debugging, thereby improving the trustworthiness of decision-making systems. By contrast, supportive evidence is less important in the robotics settings usually studied by the offline-RL community, which motivates us to select *interpretability and explainability* rather than reinforcement learning as our primary area.
We believe the revisions addressing your insightful comments have enhanced the clarity of our manuscript, and that our novel approach, coupled with the expansive applications of accountability in batched control tasks, holds the potential to make a meaningful contribution to the community.
Once again, we deeply appreciate the attention and thoroughness you have provided throughout the review process. | Summary: The paper presents the Accountable Batched Controller (ABC) based on the example-based explanation framework as a solution for offline control in responsibility-sensitive applications. Through experiments on simulated and real-world tasks, the method shows accountability, conservation, and adaptability.
Strengths: The paper proposed a novel method for accountable batched control with decision corpus, theoretically proved the existence and uniqueness of the decomposition under mild conditions, and conducted solid experiments to verify the effectiveness and desired properties of the method.
The paper is clearly presented and well-organized.
Weaknesses: To further improve, more experiments on real-world control tasks are needed.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: More in-depth analysis of Table 2 may help gain deeper insight regarding the suitability for low-data regimes.
Why is section 5.1 highlighting P1-P3?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: As the authors also mentioned, the current accountable batched control method is limited to low-dimensional control tasks and may not perform as well in high-dimensional control systems, limiting its potential contribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper. We will respond to each point in turn:
---
### 1. Additional Analysis on Table 2
We would start by providing **more empirical studies** extending the previous Table 2:
_The cumulative reward of each method is reported. Additional experiments are repeated with 5 seeds. **Higher is better.**_
| Task | Low-Data | Mid-Data | Rich-Data |
|-------------------|---------------------|---------------------|---------------------|
| 1NN | -557.07 ± 256.64 | -690.49 ± 152.59 | -512.71 ± 131.2 |
| kNN | -849.45 ± 91.23 | -670.51 ± 321.09 | -645.72 ± 220.33 |
| kNN + Belief | -659.17 ± 219.76 | -525.58 ± 436.56 | -534.02 ± 568.47 |
| ABC w/o Belief | -302.55 ± 426.39 | -173.84 ± 245.85 | -130.24 ± 184.12 |
| ABC | **-1.39 ± 1.39** | **-1.25 ± 0.40** | **-0.6 ± 0.08** |
| BC | -422.77 ± 409.51 | -225.32 ± 340.83 | -126.74 ± 280.73 |
| TD3 | -4.1 ± 2.76 | -11.95 ± 4.68 | -15.27 ± 6.46 |
| CQL | -793.89 ± 206.0 | -889.85 ± 291.99 | -805.49 ± 578.75 |
| TD3-BC | -844.95 ± 170.93 | -781.8 ± 337.11 | -821.02 ± 587.77 |
| TD3-BC-Recurrent | -82.92 ± 115.79 | -41.92 ± 59.28 | **-0.45 ± 0.57** |
| MPC | **-1.5 ± 0.43** | **-1.34 ± 0.15** | -1.41 ± 0.26 |
| Data-Avg-Return | -307.81 ± 387.53 | -245.54 ± 338.65 | -208.81 ± 272.84 |
In this updated Table, we have
(1) included ablation studies (kNN+Belief, ABC w/o Belief)
(2) included additional baselines in offline-RL
Several conclusions can be drawn from the updated table:
- (1) [High-Performance] Compared to all of the methods, including black-box algorithms and the accountable baselines, we find **ABC is able to achieve high performance in all settings**.
- (2) [Efficacy under Low-Data Regime] Comparing ABC with the baselines, we observe the superiority of ABC especially under the low-data regime. In such a setting, **ABC is able to effectively solve the problem while many other methods suffer from greater instability**.
- (3) [Ablation Studies] In addition to the comparison between ABC and kNN, we **additionally experiment** with a kNN that works in the belief space and find it improves the performance of kNN, demonstrating the **effectiveness of leveraging the belief space** in accountable decision-making. Moreover, we demonstrate the **effectiveness of the minimal convex hull decomposition**, another algorithmic design choice, through the experiment of ABC w/o Belief (i.e., kNN with minimal convex hull decomposition). We find its results are better than the original kNN's, yet significantly worse than ABC's.
- (4) [Instability of Offline-RL Algorithms] Conventional offline-RL algorithms focus only on conservatism when learning from an offline decision corpus in MDP tasks; hence **they fail to learn efficiently in our partially observable tasks with high stochasticity**. To make those algorithms stronger, we additionally implemented a recurrent module for the TD3-BC algorithm and compared it to ABC. We find those offline-RL algorithms **suffer from instability during learning and struggle to converge** to a well-performing policy. We provide an additional analysis of their learning process in Appendix D.7.
- (5) [Property of Conservation] We would like to note the property of conservation is demonstrated through the _offline nature_ of those benchmark tasks.
To enhance clarity, we refer to the **added ablation study, which further demonstrates the importance of conservation**: comparing kNN with ABC w/o Belief (both work in the original input space), or kNN + Belief with ABC (both work in the belief space), the performance gains of ABC w/o Belief and ABC over their counterparts stem from the conservative property introduced by the minimal convex hull.
### 2. More tasks
To address the reviewer's concern on more challenging tasks, we refer to additional experiments in Appendix F.3, F.4, and F.5, where we demonstrate the scalability of ABC and the potential of combining ABC with black-box policies. From such a perspective, ABC can be used as a post-hoc interpretation mechanism for any given black-box algorithm.
To be specific, we additionally experimented on the LunarLander-Continuous environment and the BipedalWalker environment. In those experiments, we find that the dimensionality of the states is not a critical issue, but an increase in the action dimensions can be more challenging: it originates from the uniform sampling over the action space in our algorithm. To address this difficulty, we investigate the potential of integrating ABC with black-box samplers in F.4.
We can observe from the results that ABC can work both in isolation or combined with black-box policies. ABC can be used as a plug-in to add accountability to black-box controllers in a post-hoc manner. In high-dimensional control tasks, uniform sampling can be inefficient and black-box samplers can alleviate such a difficulty. This could potentially be a promising direction for future research.
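The selection pipeline described above might be sketched as follows (a hypothetical illustration with assumed names: `residual_fn` stands in for the belief corpus residual, `value_fn` for the downstream scoring, and `sample_actions` for either the uniform or the black-box sampler):

```python
import numpy as np

def select_action(sample_actions, residual_fn, value_fn, q=0.3):
    """Sketch of sampling-based selection: draw candidate actions
    (uniformly or from a black-box sampler), discard those whose
    residual exceeds the q-quantile (treated as OOD), then pick the
    best surviving action."""
    actions = np.asarray(sample_actions())
    residuals = np.array([residual_fn(a) for a in actions])
    keep = residuals <= np.quantile(residuals, q)   # conservative filter
    kept = actions[keep]
    values = np.array([value_fn(a) for a in kept])
    return kept[int(np.argmax(values))]
```

With a black-box sampler, `sample_actions` simply proposes candidates near the black-box policy's output instead of covering the whole action space, which is the balance between accountability and efficiency discussed above.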
---
Should there be any additional questions or concerns, we are more than willing to provide further explanations.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I want to thank the authors for the detailed response, I have considered them, and I maintain my original score.
---
Rebuttal 2:
Title: Further Discussions and Feedback Welcome
Comment: We deeply appreciate the insights you've shared during the review process. Following our revisions and previous responses, we are genuinely curious if we have adequately addressed the concerns you raised.
Should there be any leftover questions, concerns, or areas you feel need more clarification, please do not hesitate to let us know. We greatly respect your insights and stand ready to make any additional refinements based on your feedback. | Rebuttal 1:
Rebuttal: We extend our sincere gratitude to all reviewers for their insightful comments, valuable suggestions, time, and efforts in evaluating and improving our paper.
We thank all reviewers for their affirmation of our work’s **novelty** (reviewers: 9Kjz, izcS), **presentation** (reviewers: fhj3, 9Kjz, izcS, fQBY), **evaluation** (reviewers: fhj3, 9Kjz, izcS, qevo), and **importance** (reviewers: fhj3, fQBY, qevo).
----
To address the concerns raised by reviewers, we respond to each of their questions respectively. Below, as a general response, we outline the **key revisions and additional experimentation conducted thus far**:
#### **Supplementary Experimental Evaluation**
1. **(Table 1 in attached PDF)** We conducted additional ablation studies of
- kNN + Belief (i.e., ABC w/o Minimal Convex Hull Decomposition)
- ABC w/o Belief (i.e., kNN + Minimal Convex Hull Decomposition)
2. **(Table 2 in attached PDF)** We provide additional offline-RL baselines, including
- CQL [1]
- TD3-BC [2]
- TD3-BC-Recurrent that improves TD3-BC using a recurrent module in POMDP tasks [3,4].
3. **(Figure 1 in attached PDF)** We highlight the controllable conservative behavior of ABC by varying the quantile number.
4. **(Table 3 in attached PDF)** We demonstrate the scalability of ABC by providing results on the LunarLanderContinuous environment.
5. **(Table 4 in attached PDF)** We experiment with the BipedalWalker environment to showcase how to integrate ABC with black-box controllers to add accountability.
#### **Revised Manuscript for Clarity**
1. We have revised the terminology used in Proposition 3.10 to eliminate any ambiguity regarding the dimension restriction of the minimal convex hull.
2. We have emphasized that the primary focus of ABC extends beyond the problem of offline RL. While offline RL possesses only the conservation property, ABC has five distinct properties that are all crucial for accountable batched control tasks.
3. In Sec. 3.5 and Sec. 5.2, we have employed distinct symbols to enhance clarity: $\epsilon$ for the constant, and $\epsilon(q)$ for the constant as a function of a given quantile threshold.
4. We explained the 5 desired properties more explicitly in the related work section, illustrating why compared baselines may not satisfy each of those properties.
----
We hope our clarification and additional empirical studies could address the concerns raised by reviewers. Should there be any leftover questions, please let us know and we will make every effort to address them during the subsequent discussion period.
---
**_References_**
[1] Kumar, Aviral, et al. "Conservative q-learning for offline reinforcement learning."
[2] Fujimoto, Scott, and Shixiang Shane Gu. "A minimalist approach to offline reinforcement learning."
[3] Ni, Tianwei, Benjamin Eysenbach, and Ruslan Salakhutdinov. "Recurrent model-free rl can be a strong baseline for many pomdps."
[4] Fakoor, Rasool, et al. "Meta-q-learning."
Pdf: /pdf/38d837b3e034511d679bee0d756bd3d4775f3061.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: I found the paper somewhat hard to read and understand, so here I’ll present a summary that’s quite different from the author’s presentation.
In offline RL, or other settings where there is a performance metric to optimize, we can consider two simple baselines:
1. Nearest neighbors: For each action, find the most similar transition(s) in the dataset, and use those to estimate the value of the action, then take the action estimated to be best.
2. Supervised learning: Train a model to predict the value of each action, and use that to estimate the value of actions.
The advantage of (1) is that we get a notion of explainability (visualize the nearest neighbors that were used to estimate action value), but the disadvantage is that it does not work well (because similarity in the input space may not mean that decision-making will be similar). The advantage of (2) is that it works better, but is less explainable. So the first idea is that we can get the best of both worlds by still using (2) to train a model, but then use the embeddings (i.e. the activations before the final linear layer) as inputs for a nearest neighbors approach.
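A minimal sketch of this kNN-on-embeddings idea (hypothetical names; `embed` stands in for the penultimate-layer activations of a trained model):

```python
import numpy as np

def knn_on_embeddings(embed, corpus_x, corpus_y, query, k=3):
    """Nearest neighbors in the learned embedding space rather than the
    raw input space: the k nearest corpus points give both a value
    estimate (their mean label) and an explanation (their indices)."""
    z = embed(query)
    dists = np.linalg.norm(np.stack([embed(x) for x in corpus_x]) - z, axis=1)
    nearest = np.argsort(dists)[:k]
    return corpus_y[nearest].mean(axis=0), nearest
```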
However, this can still have problems: in particular, for a new test point, the nearest neighbors may all be very tightly clustered but far away from the test point. Ideally, in such a situation, we would find nearby points in a variety of _different_ directions, and average them, so that our estimates are interpolations rather than extrapolations in the embedding space. So, instead of finding the nearest neighbors in our dataset, we find a minimal set of points from the dataset such that the current embedding falls within the convex hull of those points (or, if no such set exists, the embedding is as close as possible to the convex hull). We automatically discard any actions that are far away from the best convex hull, since they are likely OOD. This gives the author’s method: ABC.
(In the paper’s presentation, the embeddings are called “beliefs”.)
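The convex-hull criterion above can be sketched as projecting the query embedding onto the convex hull of the corpus embeddings. The sketch below is a hypothetical illustration using Frank-Wolfe with exact line search, not necessarily the paper's actual solver:

```python
import numpy as np

def hull_decompose(corpus, b, iters=200):
    """Project b onto the convex hull of the rows of `corpus`.
    Returns the convex weights and the residual distance (the analogue
    of the belief corpus residual used for OOD filtering)."""
    n = corpus.shape[0]
    w = np.full(n, 1.0 / n)                # start from the uniform mixture
    for _ in range(iters):
        x = corpus.T @ w                   # current reconstruction of b
        grad = corpus @ (x - b)            # gradient of 0.5*||x - b||^2 in w
        i = int(np.argmin(grad))           # vertex to move toward
        d = corpus[i] - x
        denom = float(d @ d)
        if denom < 1e-12:                  # converged onto a vertex
            break
        gamma = float(np.clip((b - x) @ d / denom, 0.0, 1.0))
        w *= 1.0 - gamma                   # keep w on the simplex
        w[i] += gamma
    return w, float(np.linalg.norm(corpus.T @ w - b))
```

A query inside the hull gets residual near zero plus its supporting weights (the interpolation); a query outside gets the distance to the hull, which is what the OOD filter thresholds.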
The authors test their method in a variety of settings:
1. Heterogeneous Pendulum: Similar to classic Pendulum, except that there is a 50% chance for the action effects to be swapped.
2. Maze: A 2D setting with a wall separating the start and goal states, with two openings in the wall.
3. Ward: Healthcare task, in which the task is to predict whether or not to use an oxygen therapy device.
In heterogeneous Pendulum, the authors show that ABC performs slightly better than model-free RL and model predictive control, and much better than other baselines. They also show the effect of ablating $\epsilon$, the hyperparameter that controls which actions are considered OOD.
In Maze, the authors collect data from a variety of different behavioral policies, which have to be composed together to solve the task, and show that ABC is capable of this. There are two different ways to solve the task, corresponding to the two openings in the wall. The authors show that ABC can show both methods of solving the task, and that when visualizing the points forming the minimal convex hull for the resulting actions, they can be attributed to the behavior policies that used the same hole in the wall to solve the task. They also show that by increasing the proportion of different behavioral policies, you can control which of the two solutions ABC is more likely to use.
On Ward, the authors show that ABC performs on par with behavior cloning (BC) using a multilayer perceptron, and performs better than k-nearest-neighbors and BC using a linear model.
Strengths: 1. Once I understood the idea, I found it simple and intuitive, with a clear story about why it should be helpful.
2. The application of machine learning to healthcare is important, and accountability and conservatism are important properties to ensure in such a setting.
3. There are a variety of experiments demonstrating the claimed properties of the method.
Weaknesses: **Properties of ABC**
The authors list five properties that ABC satisfies. I agree with the authors' points that ABC works better with low data (at least relative to kNN) and that ABC can be used in the reward-free setting (at least for continuous action spaces). However, I'm not convinced of the other three properties:
1. Conservatism: The authors claim that ABC is conservative, I believe because they filter out actions that have a belief corpus residual that is too high. While I think the authors are probably correct, I don’t think their experiments show it: in all of the experiments that compare against baselines, black-box methods perform about the same as ABC, even though black-box methods are not normally “conservative”.
2. Accountable: The authors claim that ABC is accountable because for any action taken by ABC, we can identify data points in the training dataset that make up the convex hull that determined that particular action, and show those to the user. However, there isn’t even a qualitative evaluation of how useful such explanations are. The closest is Figure 5, which visualizes the belief corpus as points on a 2D grid whose axes are uninterpretable (belief dimensions 1 and 2) relative to the test data point, but looking at that figure I do not feel like I have understood very much about ABC’s decision in that setting.
3. Adaptive: To show that the ABC is adaptive, the authors perform an experiment in which they change the composition of the dataset on which ABC is trained, and show that this affects ABC’s behavior. But by this standard, essentially all algorithms are adaptive, including the baselines they compare against (e.g. behavior cloning, which they say is not adaptive in Table 1). It’s not clear why this is a unique advantage of ABC.
(Incidentally, on accountability, the authors' technique is extremely similar to presenting maximally activating dataset examples to explain neuron activations, a common technique for explainability in supervised learning.)
**Additional comparisons**
I would like to see the authors compare ABC to the first method in my summary, i.e. training a model to predict value / actions, and then using k-nearest-neighbors on the embeddings (activations before the final linear layer). This can be thought of either as a baseline, or as an ablation (as an ablation, it corresponds to ABC without the convex hull aspects). This would be helpful in understanding the effects of the various design decisions the authors make.
If performing an experiment on accountability, then I’d like to see a comparison to the dataset examples technique applied to the kNN-on-embeddings model discussed above.
**Disagreement with Section 5.2 claim**
Section 5.2 notes that there are two hyperparameters: “the number of uniformly sampled actions and the threshold”. It claims that these can be unified into a single hyperparameter, the _effective action size_. However, the experiment doesn’t support this: it simply sets the number of sampled actions (which we’ll call $n_A$) to 100, and then shows the effect of varying the percentile threshold $\epsilon$. The experiment that should be run would be to use a variety of settings of _both_ hyperparameters, and then check whether runs with similar effective action sizes $\epsilon \times n_A$ have similar performance: if so, then it would be justified to only think about the effective action size. However, my guess is that this will not be the case.
**Minor issue with the theory**
(Note: set notation doesn't seem to be working below)
Proposition 3.10 is false because of the requirement that the convex hull contain $d_b + 1$ examples. For example, suppose $d_b = 2$, $b_t = [1, 0]$, and $\mathcal{D} = \{ [7, 0], [3, 0], [-1, 0] \}$. Note that $b_t = 0.5 \cdot [-1, 0] + 0.5 \cdot [3, 0]$, and so if we have $\mathcal{C} = \mathcal{D}$, then $b_t \in \mathcal{CB}(\mathcal{C})$, and so $r_{\mathcal{C}}(b_t) = 0$ as required by Proposition 3.10. Definition 3.9 requires the minimal corpus subset $\tilde{\mathcal{C}}(b_t)$ to have 3 elements, which means that $\tilde{\mathcal{C}}(b_t) = \mathcal{D}$. However, the decomposition on the minimal hull is not unique, since we have both $b_t = 0.5 \cdot [-1, 0] + 0.5 \cdot [3, 0]$ as well as $b_t = 0.75 \cdot [-1, 0] + 0.25 \cdot [7, 0]$, contradicting Proposition 3.10.
The issue is that you require the convex hull to contain $d_b + 1$ examples. If you remove that restriction, then in the example above $\tilde{\mathcal{C}}(b_t) = \{ [-1, 0], [3, 0] \}$ and then the decomposition is unique, as desired.
(I believe your current proof would also work if you remove the restriction. Currently, it doesn’t work because you remove an element from $\tilde{\mathcal{C}}(b_t)$ and call that a contradiction, but it is actually not a contradiction because the newly created set no longer has $d_b + 1$ elements.)
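The arithmetic behind the counterexample is easy to verify numerically (values copied from the example above):

```python
import numpy as np

# Corpus D = {[7, 0], [3, 0], [-1, 0]} and target b_t = [1, 0].
b_t = np.array([1.0, 0.0])

# Two distinct convex combinations over the three corpus points:
dec_a = 0.5 * np.array([-1.0, 0.0]) + 0.5 * np.array([3.0, 0.0])
dec_b = 0.75 * np.array([-1.0, 0.0]) + 0.25 * np.array([7.0, 0.0])

# Both reconstruct b_t exactly with different supports, so the
# decomposition over all d_b + 1 = 3 points is not unique.
```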
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: **Overall view and suggestions for the authors**
I quite like the idea in this paper, but currently I think the evaluation and presentation of the idea are not good enough, and so I am recommending rejection. However, I think there is the seed of a good paper here, and would likely be quite excited about a version of the paper that looked more like:
1. Discussing embeddings as a useful way to get a nice structured representation, perhaps considering kNN-on-embeddings as the baseline.
2. Identifying linear interpolation in the minimal convex hull as an alternative to the kNN decision criterion.
3. Conducting a series of experiments that demonstrate the value of the convex hull idea, focusing particularly on questions like: (a) Are belief corpus residuals better at OOD detection than nearest-neighbor distances? (b) Does linear interpolation in a convex hull lead to better performance than kNN on embeddings? (c) Do the points in the convex hull provide a better explanation of the selected action than the k nearest neighbors in embedding space? I think it is quite plausible that convex hulls do better on all of these metrics, but the current experiments don’t show it.
**Note on confidence**
I’ve selected a confidence of (4) below, but I want to note that I am not very familiar with related literature, and in particular I know very little about accountability in the healthcare setting. As a result, I cannot evaluate (1) the originality of the work (maybe convex hulls have been explored before), and (2) whether the authors compared to state of the art techniques.
**Questions**
I’m interested in responses to any of the weaknesses I listed above, but in addition, I have some questions on specific details:
1. Why do you require that the minimal corpus subset $\tilde{C}(b_t)$ have $d_b + 1$ elements?
2. In Section 3.5, $\epsilon$ appears to be an absolute threshold for the belief corpus residual, but in Section 5.2, it appears to be a percentile for the belief corpus residual. Which is it?
3. In the Maze environment, what is the performance measure for the behavioral policies that make up your dataset? Is it cumulative reward on the final task (i.e. going from (0,0) to (16,0)) or cumulative reward on each individual task (i.e. going from (0,0) to (8, 16), going from (0, 0) to (8, 8), etc), or something else entirely?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The weaknesses listed above are not present in the paper. Suggestions for improvements are in the previous sections.
The paper applies ABC to discrete settings, and also says that it can work in reward-free settings, but the idea for reward-free settings only works for continuous action spaces, not discrete ones. This is mostly not a big deal since the reward-free setting is not currently a major focus of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper. We will respond to each point in turn:
---
### 1. Properties
- **Conservatism**. We demonstrate the property of Conservation through our experiments provided in Appendix F2. We explicitly visualize the behaviors of ABC under different degrees of conservation.
- **Accountability**. ABC is accountable because it provides reference examples in making decisions. The high-level insight follows example-based explanations in XAI. In our work, we put a special focus on sequential decision-making problems rather than prediction tasks in the XAI literature.
- **Adaptivity**. We recognize that our previous description of adaptivity could be misleading. In fact, we wish to demonstrate the on-the-fly adaptivity of ABC, which is tied to its accountability: because ABC links decisions to reference training data, filtering out unwanted training data during deployment is much easier than training a new model, as BC and other algorithms would require, to perform a specific type of decision under user specification.
### 2. Ablation studies, and additional baselines
To address the reviewer's concern about empirical evaluation, we implemented two more variants of ABC and kNN as baselines, including
- **kNN+Belief**: it uses the learned latent space for decision-making when applying kNN. This study could be regarded as the ablation study of _ABC w/o Minimal Hull_.
- **ABC w/o Belief**: it uses the original input space for decision-making, but applies the minimal hull decomposition. This study could be regarded as the ablation study of _kNN + Minimal Hull_.
Results are provided in the following extended Table 2:
_The cumulative reward of each method is reported. Additional experiments are repeated with 5 seeds. **Higher is better.**_
| Task | Low-Data | Mid-Data | Rich-Data |
|-------------------|---------------------|---------------------|---------------------|
| 1NN | -557.07 ± 256.64 | -690.49 ± 152.59 | -512.71 ± 131.2 |
| kNN | -849.45 ± 91.23 | -670.51 ± 321.09 | -645.72 ± 220.33 |
| kNN + Belief | -659.17 ± 219.76 | -525.58 ± 436.56 | -534.02 ± 568.47 |
| ABC w/o Belief | -302.55 ± 426.39 | -173.84 ± 245.85 | -130.24 ± 184.12 |
| ABC | **-1.39 ± 1.39** | **-1.25 ± 0.40** | **-0.6 ± 0.08** |
| BC | -422.77 ± 409.51 | -225.32 ± 340.83 | -126.74 ± 280.73 |
| TD3 | -4.1 ± 2.76 | -11.95 ± 4.68 | -15.27 ± 6.46 |
| CQL | -793.89 ± 206.0 | -889.85 ± 291.99 | -805.49 ± 578.75 |
| TD3-BC | -844.95 ± 170.93 | -781.8 ± 337.11 | -821.02 ± 587.77 |
| TD3-BC-Recurrent | -82.92 ± 115.79 | -41.92 ± 59.28 | **-0.45 ± 0.57** |
| MPC | **-1.5 ± 0.43** | **-1.34 ± 0.15** | -1.41 ± 0.26 |
| Data-Avg-Return | -307.81 ± 387.53 | -245.54 ± 338.65 | -208.81 ± 272.84 |
We additionally compare against CQL and TD3-BC in Section 5.1. Unsurprisingly, we find the performance of the original CQL and TD3-BC on Pendulum-Het is poor. This is because they are not designed for partially observable tasks with high stochasticity.
Therefore, we further implemented an improved version of TD3-BC with a recurrent context-encoding module [cf. Meta-Q-Learning]. In those experiments, we find the learning process of offline RL is not stable, leading to large variance in policy quality; we observed a similar problem with our previous MFRL baseline and report it in detail in Appendix D.7.
### 3. Hyper-parameters
We agree with the reviewer that using a smaller sampling size with a larger quantile number is less preferable than using a larger sampling size with a smaller quantile number; the latter leads to a more accurate estimation and more conservative behavior.
We have updated our manuscript accordingly.
### 4. Theory
In the counterexample raised by the reviewer, the belief space degenerates to 1 dimension rather than 2 for the specific convex hull decomposition mentioned.
The key issue is indeed that **the dimension of the belief space should be more clearly defined**. In this example, the minimal hull should not contain three points, because its hyper-volume (i.e., its length in this 2-D example) is not minimized.
We have updated the dimension of the search space from **$d_b+1$** to **at most $d_b+1$** to enhance clarity.
### 5. Threshold
To keep the methodology section clear, we use $\epsilon$ as a constant in Sec. 3.5. In practice, such a constant can be implemented through quantile thresholding (Sec. 5.2). We use the same notation to emphasize that this quantile number controls the threshold. We have updated the notation in Sec. 5.2 to $\epsilon(q=0.3)$, $\epsilon(q=0.5)$, etc., to enhance clarity.
### 6. Terminology of _Reward-Free_
In our context, the reward-free indicates the **strictly batched imitation** settings where reward information is not accessible. Different from the normal batched control setting where an offline dataset containing $(o_t, a_t, r_t)$ is available, the reward-free setting can only leverage a dataset that is composed of $(o_t, a_t)$.
---
Should there be any additional questions or concerns, we are more than willing to provide further explanations.
---
Rebuttal Comment 1.1:
Title: Raising my score
Comment: Thanks for the response! It has addressed most of my concerns, and I am raising my score from 3 to 5 (and contribution from 2 to 3).
My main remaining concerns are:
1. While it is true that with the authors’ method it is possible to identify the data points in training that affect the action taken, it is not clear to me how much accountability this provides.
2. I find it hard to square the results of Appendix F2 and the failure of the kNN + embeddings method with the success of black box methods, which makes me think I’m misunderstanding something about the paper.
3. As mentioned in the review, I find the presentation of the paper quite confusing.
---
Reply to Comment 1.1.1:
Title: Follow-up Response
Comment: Thank you for the continued evaluation and the follow-up questions. We hope our explanation below could be helpful in addressing your remaining concerns:
### 1. Accountability through reference examples.
ABC offers a novel example-based approach to interpretable policy learning. As has been shown in [1], **human subjects in fact find example-based explanations more insightful than explanations based on feature importance**, especially for human-machine cooperative tasks.
To see how example-based interpretability (i.e., accountability) can be more helpful and distinguish from feature-based interpretability in decision-making, we will illustrate with an example of cancer treatment involving high-risk options like Radiotherapy and Chemotherapy.
**Conventional Interpretable-RL is for Model Understanding**
Existing interpretability methods in RL, such as feature saliency [2], input importance [3], and converting black-box models into interpretable formats [4], primarily help users understand how models arrive at decisions. Using these methods, users can understand **how decisions correlate with specific features**. In the context of cancer treatment, such interpretations might attribute decisions to certain biomarker levels. This interpretability facilitates debugging and refining model decisions. For instance, if the policy focuses on unnecessary or causally unrelated features, doctors and experts can improve policy learning by removing those inputs [3,5].
We would like to note the focus in such a case is to **debug and improve the model’s decision**.
**Accountability Benefits Human-AI Cooperation**
However, **why** a policy generates its decisions remains unclear (e.g., what is the decision support?). Even with the above type of interpretability, people may still wonder why a certain biomarker should determine the treatment plan. Knowing that there are successful cases in which similar patients received the same treatment plan will help non-experts (e.g., patients) understand the process (and, importantly, be optimistic about the outcome).
ABC shifts the focus to understanding why policies decide as they do, especially when the reason is non-obvious. In critical applications, like the aforementioned cancer treatment, understanding the why is crucial. ABC enhances human-AI cooperation by offering reference examples, which are **more intuitive for humans**, aiding them in complex decision-making. ABC achieves this by mapping examples to the belief space that is linearly dependent on the outcome and identifying supporting examples through a minimal convex hull decomposition in such a space.
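To illustrate the mechanism only (this is a toy sketch we constructed for exposition, not ABC's actual implementation), consider a hypothetical 2-D belief space with three reference examples: the weights expressing a query belief as a convex combination of the references can be recovered in closed form, and the largest weights point to the most representative supporting examples. ABC's minimal convex hull decomposition generalizes this idea to its learned belief space; all names and numbers below are illustrative assumptions.

```python
# Toy illustration (hypothetical): recover the convex-combination weights that
# express a query belief as a mixture of three reference examples in 2-D.
# ABC's actual belief space and minimal convex hull decomposition are more
# general; this only sketches the "supporting examples via weights" idea.

def support_weights(p1, p2, p3, q):
    """Barycentric coordinates of query q w.r.t. reference points p1, p2, p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    xq, yq = q
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (xq - x3) + (x3 - x2) * (yq - y3)) / det
    w2 = ((y3 - y1) * (xq - x3) + (x1 - x3) * (yq - y3)) / det
    return w1, w2, 1.0 - w1 - w2

refs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # three reference beliefs
query = (0.2, 0.3)                             # belief of the current case
w = support_weights(*refs, query)
# For a query inside the hull, the weights are non-negative and sum to 1;
# the largest weights identify the most representative supporting examples.
```

Here `w` comes out to `(0.5, 0.2, 0.3)`, so the first reference example supports the decision most strongly.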
### 2. Integrating ABC with Black-Box Models
In Appendix F.2, we highlighted that ABC can be used as a post-hoc interpretation module by combining it with any black-box policy in decision-making. This is because, for any given transition history and action, ABC is able to find the corresponding minimal convex hull decomposition, and therefore the most representative reference examples for executing that action. In high-dimensional tasks where uniform sampling from the action space can be inefficient, leveraging a black-box model as the sampler can achieve a good balance between accountability and performance.
---
**References**
[1] Nguyen, Giang, Daeyoung Kim, and Anh Nguyen. "The effectiveness of feature attribution methods and its correlation with automatic evaluation scores." Advances in Neural Information Processing Systems 34 (2021): 26422-26436.
[2] Mott, Alexander, et al. "Towards interpretable reinforcement learning using attention augmented agents." Advances in neural information processing systems 32 (2019).
[3] Yujin Tang, Duong Nguyen, and David Ha. Neuroevolution of self-interpretable agents. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference, pages 414–424, 2020.
[4] Daniel Hein, Steffen Udluft, and Thomas A. Runkler. "Interpretable policies for reinforcement learning by genetic programming." Engineering Applications of Artificial Intelligence (2018): 158–169.
[5] De Haan, Pim, Dinesh Jayaraman, and Sergey Levine. "Causal confusion in imitation learning." Advances in Neural Information Processing Systems 32 (2019).
---
Thank you again for your consideration and supportive feedback. Should there be any leftover concerns, please let us know and we will do our utmost to address them.
---
Reply to Comment 1.1.2:
Title: Follow-up Response on Presentation
Comment: Dear Reviewer fhj3,
In response to your feedback regarding the presentation, and to incorporate your valuable comments, we have made a series of updates to our manuscript. We have detailed these changes in the official comment titled "Follow-Up Author Response on Presentation to Reviewer fhj3 and fQBY".
We hope the reorganized introduction and the method sketch paragraph inspired by your comments address your concerns about our presentation. We would appreciate it if you could kindly let us know if there are any further questions. In the limited time remaining, we are still eager to do our utmost to address them!
Regards,
Authors | null | null | null | null | null | null |
Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage | Accept (poster) | Summary: This paper presents algorithms based on soft Q-learning for offline RL with finite-sample guarantees. The guarantees, given for general function approximation, hold under weaker requirements, such as a (partially) weaker variant of single-policy concentrability, and avoid the Bellman-completeness assumption.
Strengths: - The paper proposes algorithms based on soft Q-learning that only requires single-policy concentrability and realizability, without Bellman completeness. While a couple of recent papers such as Zhan et al. 2022 have also accomplished this, the guarantees presented in this work hold under a weaker single-policy coverage requirement and the MQP algorithm avoids the realizability of regularized optimal solutions, unlike the PRO-RL algorithm of Zhan et al. 2022.
- The proposed algorithms appear to be implementable compared to prior works such as Chen and Jiang.
Weaknesses: - The dual variables $l(s,a)$ here seem to be closely connected to importance weights $w(s,a)$. The Bellman consistency term in (4) seems to be similar to the importance weighted regularization considered in Zhu et al. 2023, in which there is a $\max_w$. The main difference is using $\Omega$ here. In that paper, the variables $w(s,a)$ are importance weights, which ensure that the average Bellman error is small under relevant distributions (including the data and optimal distributions). Would you please provide a comparison between the two objectives and comment on the connections between the variable $l(s,a)$ in your paper vs. $w(s,a)$ in their paper? Does the function $\Omega$ contribute to avoiding the Bellman completeness assumption? What is the key factor in your proposed objective or analysis that removes the need for Bellman completeness compared to Zhu et al?
- Convergence rates are slower than $1/\sqrt{N}$ (or involve the gap), which are achieved by some of the recent works such as Zhu et al. 2023.
**References**
Zhu, Hanlin, et al. "Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning." arXiv preprint arXiv:2301.12714 (2023)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please refer to the questions in the above segment.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Would you please provide a comparison between the two objectives and comment on the connections between the variable $l(s,a)$ in your paper vs. $w(s,a)$ in their paper (Zhu et al.)? Does the function $\Omega$ contribute to avoiding the Bellman completeness assumption? What is the key factor in your proposed objective or analysis that removes the need for Bellman completeness compared to Zhu et al?**
We appreciate your insightful suggestion. Here, we summarize the comparison, which we will incorporate into a revised draft along with a citation to Zhu et al.
* It appears that Zhu et al. share a similar flavor with Jiang and Huang (2020), to which we compare in Table 1. If we understand correctly, Zhu et al. can be regarded as a computationally efficient version of Jiang and Huang (2020). Essentially, both works, Jiang and Huang (2020) and Zhu et al., require the realizability of Q-functions for all policies; a primary objective in our paper is to avoid such policy-uniform assumptions and make only single-policy assumptions. Similarly to us, though, they do not need Bellman completeness. These policy optimization results are built upon off-policy evaluation (OPE) findings, such as Uehara et al. (2020), which demonstrate that only realizability of the weight $w_{\pi}$ and $q_{\pi}$ is needed to estimate the value of a policy $\pi$. Our approach is entirely different in the sense that we directly tackle the policy optimization problem by estimating soft Q-functions or optimal Q-functions without performing policy evaluation as an intermediate step. This stems from our goal of constructing methods that do not demand realizability for all policies $\pi$.
* As a result, the function $w_{\pi}(s,a)$ in Jiang and Huang (2020) and Zhu et al. is very different from our $l_*(s,a)$ (and $l^*_{\alpha}(s,a)$). Using our notation, we can write $w_{\pi}(s,a) = d_{\pi,\mu_0}(s,a) / P_b(s,a)$, where $d_{\pi,\mu_0}$ is the discounted occupancy distribution induced by a policy $\pi$ when the initial distribution is $\mu_0 \times \pi$. On the other hand, $l_*$ is equal to $d_{\pi^*,P_b}(s,a) / P_b(s,a)$, where $P_b$ in $d_{\pi^*,P_b}(s,a)$ is the initial distribution. Here, the crucial difference is the initial distributions.
* The reason we do not need Bellman completeness is that we leverage the characterization of saddle points in the convex optimization problem in equation (4), as we explain in Section 4. When our goal is to estimate the value of a policy $\pi$ (offline policy evaluation), we can take a similar approach and ensure the statistical guarantee under realizability of $q_{\pi}$ and $w_{\pi}$ by considering the following optimization problem:
$$ \min_q E_{(s,a)\sim \mu_0\times \pi}[q(s,a)] $$
such that
$$ E[\gamma q(s',\pi)+r-q(s,a)|s,a]=0.$$
Then, the saddle point of the Lagrangian version of the above optimization problem is $(q_{\pi},w_{\pi})$ (the Q-function and weight function for a policy $\pi$). This implies that we can estimate the value of a policy $\pi$ under realizability of $(q_{\pi},w_{\pi})$ without Bellman completeness. Indeed, this leads to the proposal in Jiang and Huang (2020). However, to achieve our goal (removing realizability for all policies $\pi$), we need to devise a different optimization problem, as in our equations (2)-(4). Furthermore, even from a technical point of view, compared to the above optimization problem and the one in Zhan et al. (2022), our formulation is interesting: the above formulation is a linear convex optimization problem, while ours is nonlinear and nonconvex, and is equivalent to a nonlinear _convex_ optimization problem obtained by relaxing the equality constraints in equation (3) to inequalities (see Lemma 5 in Appendix B).
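To make this concrete, here is a toy tabular sketch (our own illustrative construction with made-up numbers, not part of the paper): with $\pi$ fixed, the equality constraint $E[\gamma q(s',\pi)+r-q(s,a)|s,a]=0$ pins down $q_{\pi}$ as the unique Bellman fixed point, and the objective $E_{\mu_0\times\pi}[q]$ then equals the policy value, which we can cross-check against the discounted-occupancy computation.

```python
# Toy tabular check (illustrative; numbers are our own assumptions): with the
# policy pi fixed, states are {0, 1}, P[s][s'] is the induced transition
# matrix, r[s] the induced reward, and gamma the discount.
gamma = 0.9
P = [[0.9, 0.1], [0.2, 0.8]]
r = [1.0, 0.0]
mu0 = [0.5, 0.5]  # initial state distribution

# (i) The equality constraint has a unique solution: iterate the Bellman
# operator q <- r + gamma * P q to its fixed point q_pi.
q = [0.0, 0.0]
for _ in range(2000):
    q = [r[s] + gamma * sum(P[s][t] * q[t] for t in (0, 1)) for s in (0, 1)]

# (ii) The objective E_{mu0}[q_pi] equals the discounted return computed
# directly from the occupancy measure: J = sum_t gamma^t * (mu_t . r).
J, mu = 0.0, mu0[:]
for t in range(2000):
    J += gamma ** t * sum(mu[s] * r[s] for s in (0, 1))
    mu = [sum(mu[s] * P[s][t2] for s in (0, 1)) for t2 in (0, 1)]

obj = sum(mu0[s] * q[s] for s in (0, 1))
# obj and J agree, illustrating why minimizing E[q] subject to the Bellman
# equality constraints recovers the value of pi.
```

This is only the primal side; the dual variables of the constraints play the role of the weight function $w_{\pi}$ discussed above.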
Uehara, Masatoshi, Jiawei Huang, and Nan Jiang. "Minimax weight and q-function learning for off-policy evaluation." International Conference on Machine Learning. PMLR, 2020. | Summary: This paper proposes a q-learning based algorithm for solving the offline minimax problem in reinforcement learning. Under certain partial coverage and realizability assumptions for the regularized problem, they prove $O(1/\epsilon^4)$ sample complexity.
Strengths: This is the first paper for minimax problem in offline RL that only requires realizability assumption and partial data coverage (without completeness and all-policy concentrability).
Weaknesses: Weakness and comments:
1. The partial data coverage and realizability assumptions are made for the regularized problem instead of the original problem. It is unclear what the relationship is between these assumptions and the standard partial coverage and realizability assumptions for the original problem, as in ``Revisiting the linear-programming framework for offline rl with general function approximation''.
2. The technique seems to be similar to the paper ``Refined value-based offline rl under realizability and partial coverage''. The authors should explain more about the novelty.
3. References missing: ``Revisiting the linear-programming framework for offline rl with general function approximation'' also proposes an algorithm which requires $1/\epsilon^2$ sample complexity without completeness assumption for offline RL.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comparison with ``Revisiting the linear-programming framework for offline rl with general function approximation'' by Ozdaglar et al.**
Thank you for bringing this paper to our attention. We will certainly cite and discuss it. Here is the comparison, which we will incorporate into the main text. The primary difference lies in our focus on assuming _only_ mild single-policy realizability assumptions, and in particular avoiding Bellman completeness assumptions (e.g., $\mathcal B^\pi \mathcal Q\subset \mathcal Q~\forall \pi\in\Pi$ as in the second line of Table 1). There are two cases considered in Ozdaglar et al. Their Case I (their Sec 3) involves completeness, therefore involving fundamentally different primitive assumptions. Their Case II (their Sec 4) avoids completeness but leverages a gap (hard margin), like Chen and Jiang (2022; see 5th line in our Table 1). In comparison, we avoid completeness and/or uniform-over-policy assumptions in the regularized soft-Q setting, and in a more similar setting where we similarly focus on the _non-regularized_ problem, using our MQP algorithm we only need to use a soft margin condition, which we argue is much more likely to hold than a gau / hard margin, especially in settings with continuous state spaces where a hard margin must imply discontinuous jumps to avoid zero. Aside from these differences, there are of course important similarities in that we both focus on single-policy concentrability. Our results offer the further opportunity to leverage a _refined_ concentrability coefficient that adapts to the $\mathcal Q$ class (see Definition 1 and discussion thereafter). We will add all of this discussion and the citation to Ozdaglar et al. to the paper. | Summary: This paper proposed value-based algorithms for offline RL without Bellman completeness and full coverage of data support. 
By formulating the objective as selecting the soft-Q-function from the set whose elements satisfy the empirical soft Bellman operator, with the selected Q minimizing the squared value under the expectation of the behavior data distribution, and transforming this into a minimax problem, MSQP and MSQ provide convergence and sample complexity guarantees with only partial coverage and realizability.
Strengths: 1. This paper proposes novel offline algorithms without the assumptions of full coverage and Bellman completeness.
2. The proposed algorithms are value-based and avoid assumptions on the initial distribution of the ground-truth MDP.
3. The proposed algorithms enjoy convergence and sample complexity guarantees under more relaxed assumptions.
4. This paper is well written and easy to follow.
Weaknesses: 1. Some intuition is missing for the proposed learning objective in Equation 2 and Equation 3, since the squared value of Q is not an explicit constraint for the offline learning problem.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Could the author explain more about the intuitions of the proposed learning objective in Equation 2, though the convergence and sample complexity guarantee is provided?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: It could be better if empirical results beyond tabular are provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Could the author explain more about the intuitions of the proposed learning objective in Equation 2, though the convergence and sample complexity guarantee is provided?**
This is an excellent point. We will certainly provide a more detailed explanation in the revised draft. Let us offer an intuitive explanation. There are two primary reasons why in equation (2) we use the objective $\mathbb E[q^2(s,a)]$: it is strictly monotonic and it is strongly convex. Here is why these properties are important:
* This choice of a strictly monotonic objective makes it so that the optimization problem in equation (2), with the feasible set in equation (3), is equivalent to a relaxed optimization problem in equation (10) in Appendix B where the equality constraints are replaced by inequalities, as we prove in Lemma 5 of Appendix B. Here any objective that is strictly monotonic in $q$ would have worked (i.e., $q\leq q'$ and $q\neq q'$ (mod $P_b$) implies that $q$ has a smaller objective than $q'$).
* The result is important because the constraints in equation (3) are not convex, as they involve an equality with a max over optimization variables (note the lack of convexity concerns each constraint for each $s,a$; the feasible set in equation (3) is a singleton, which is trivially convex). When we relax this to an inequality, the constraints become convex and easy to deal with analytically.
* Once we have these convex constraints, using the objective $\mathbb E[q^2(s,a)]$ gives a convex optimization problem. Crucially, the objective we use is _strongly_ convex which helps ensure that we obtain a _unique_ saddle point when considering the Lagrangian version of the problem. Here any strongly convex objective would have worked.
* Although this point is briefly mentioned in Line 208, we acknowledge that the paper would benefit from a clearer motivation upfront when first introducing equation (2), mentioning both strict monotonicity and strong convexity. We will add this. | Summary: This submission studies offline RL with function approximation. Under the relatively mild assumption of partial coverage and realizability of the function approximations, the paper proposes the algorithm M(S)QP that approximates the (soft-)Q-function by solving a minimax optimization problem, and establishes guarantees of $L^2$-convergence of the Q-function estimator and the sub-optimality gap of the learned policy.
Strengths: A novelty of this work is that it studies offline RL based on soft-Q-function, while previous works mostly focus on the standard Q-function or the LP formulation. There are also several technical improvements compared to prior works, including $L^2$-convergence of the estimators and slightly milder technical assumptions.
Weaknesses: (1) The proposed schemes are based on minimax optimization and hence are mainly of theoretical interest, and the dual function $l$ is hard to interpret.
(2) The bounds on PAC sample complexity are either worse than the results in the related literature, or require an extra margin condition.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) In Remark 1, the authors remark that the minimax optimization is computationally feasible when $\mathcal{L}$ is chosen to be RKHS or linear function classes. I would like to know if there is a concrete class of offline RL problems where such a choice of $\mathcal{L}$ satisfies realizability, or otherwise this remark is vacuous.
(2) The bounds presented in this paper seem to be far from optimal (e.g. $O(n^{-1/4})$ convergence or $O(\epsilon^{-8})$ rate), and they are also worse than the results in previous works (e.g. the case $\alpha=0$ requires an extra assumption). I wonder if this is an artifact of the analysis or a price of milder assumptions.
(3) Line 303: "In the following, we demonstrate that MQP, which is a special version of MSQP when $\alpha \rightarrow 0$, can achieve a faster rate of $O\left(1 / \epsilon^2\right)$."
In my opinion, this remark is slightly misleading, because the faster $O\left(1 / \epsilon^2\right)$ rate is achieved under a stronger assumption of "soft margin". For example, in tabular offline RL with gap $\Delta>0$ and single policy concentrability $C^*$, the minimax optimal sample complexity is of order $\frac{SC^*}{\Delta\epsilon}$ (Wang et al., ignoring the horizon factor), which is better than the gap-independent sample complexity $\frac{SC^*}{\epsilon^2}$ as $\epsilon\to 0$.
Xinqi Wang, Qiwen Cui, and Simon S. Du. "On Gap-dependent Bounds for Offline Reinforcement Learning."
(4) Writing:
(a) The notation $\left\|d\_{\pi\_\alpha^{\star}, P\_b} / P\_b\right\|\_{\infty}$ is ambiguous, because the measure $d\_{\pi\_\alpha^{\star}, P\_b}$ is defined on both the space $\mathcal{S}$ and the space $\mathcal{S}\times\mathcal{A}$, and so does the measure $P\_b$.
(b) Line 187 & 188 (typos): $d\_{\pi\_{\alpha^*, P\_b}}$ -> $d\_{\pi\_\alpha^*, P\_b} $
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **(1) In Remark 1, the authors remark that the minimax optimization is computationally feasible when $L$ is chosen to be RKHS or linear function classes. I would like to know if there is a concrete class of offline RL problems where such a choice of $L$ satisfies realizability, or otherwise this remark is vacuous.**
A. Thank you for the question. The purpose of Remark 1, as we will certainly make more explicit in the text, is to emphasize the practicality of our algorithm, rather than to suggest any theoretical result. Notably, this choice for $\mathcal L$ allows us to employ standard empirical risk minimization techniques without needing to solve a minimax problem. Since the implementability of our algorithm may be of interest to the reader, we do not think the remark is vacuous, but its non-theoretical purpose should be clarified. Please let us know if you agree.
While we do not believe it is relevant to Remark 1 as explained above, we do believe we also understand your question and can offer a response to it. As we understand, you are inquiring whether the realizability condition for $\mathcal L$ (our Assumption 4 or 6, for the case of soft-$Q$ or $Q^*$, respectively) can be shown to be satisfied for certain choices of $\mathcal L$ (e.g., linear functions) when the MDP is assumed to belong to certain MDP model classes that have been recently studied in RL theory, such as linear MDPs. This is an open question, which we will state and suggest deserves future attention. Nonetheless, the perspective we take is that, compared to RL theory work that starts with such model assumptions, our work is in line with the literature on model-free RL based on function approximation, where abstract function-realizability conditions are themselves the primitive assumptions about the MDP instance, rather than a model assumption. For the very particular case of realizability of weight functions, this is in line with the works of Jiang and Huang (2020) and Zhan et al. (2022), which similarly require such weight-function realizability (see Table 1).
**(2) $O(1/\epsilon^8)$ I wonder if this is an artifact of the analysis or a price of milder assumptions.**
A. This is a very interesting point. We are of the opinion that this could be attributed to the price of milder assumptions as we strive to attain the refined rate. However, we have no lower bound to establish this formally at the moment. We will note this interesting question and the possible avenue for future work on lower bounds.
**(3) $\alpha \to 0$, the standard fast rate would be $O(1/\epsilon)$**
A. Thank you for your valuable insight. We concur with your observation. To address this, we will include a remark and reference mentioning the standard fast rates with the gap being $O(1/\epsilon)$.
Regarding the two latter points, we will add discussion of the rates and highlight that their optimality is an open question, that our primary focus is on obtaining _some_ polynomial convergence under very mild assumptions, and that related work with similarly mild assumptions also has unappealing rates (e.g., Zhan et al., 2022, who, like us, focus on obtaining _some_ convergence).
**Notation $|d_{\pi^\ast_\alpha,P_b}/P_b|_{\infty}$ is ambiguous**
Thank you for bringing this to our attention. We concur with your observation, and we will introduce subscripts $\mathcal S$ and $\mathcal S\times\mathcal A$ to disambiguate.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response, which clarifies all my concerns. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies offline reinforcement learning and proposes a soft Q-learning algorithm based on reformulating the solution to the Bellman equation as a saddle point of a certain minimax optimization problem. The authors then provide PAC guarantees for estimating the entropy-regularized Q-function, which in turn gives the sample complexity bounds. A key feature of this work is that it requires weaker assumptions (only partial coverage assumption is required) compared with existing literature.
Strengths: Theoretical analysis of Q-learning with function approximation is one of the key topics in reinforcement learning (RL). This paper provides a sample complexity bound of soft Q-learning under only a partial coverage assumption, which might be of interest to the broad community of RL.
Weaknesses: I have two major concerns about this work.
(1) I am confused about the reformulation of solving the Bellman equation as a minimax optimization problem. It is well-known that both the vanilla Bellman operator and the entropy regularized Bellman operator are contraction mappings, which imply the uniqueness of both $q^*$ and $q_\alpha^*$. In that case, it is unclear what Eq. (2) means since there is really no optimization problem here. Am I missing anything?
(2) I read the proof of Theorem 5 and have some questions. In the second inequality in line 637, the event $q^*(s,a')-q^*(s,\pi^*(s))\leq 0$ happens surely because the optimal policy is greedy with respect to the optimal Q-function. In that case, why do we need to add this event to the indicator? In addition, since $q^*(s,a')-q^*(s,\pi^*(s))\leq 0$ happens surely, the first term on the right-hand side of the first inequality in line 638 is exactly equal to $1$. As a result, the bound seems to be meaningless and Assumption 7 is redundant. What am I missing here?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our draft!
**Q.** I am confused about the reformulation of solving the Bellman equation as a minimax optimization problem...
**A.** As you rightly pointed out, the solution to the Bellman equation (3) is unique, giving the soft Q-function $q^*_\alpha$, and therefore the feasible set of the optimization problem (2) is a singleton; but the Lagrangification of this optimization problem in equation (4), which is yet another reformulation of $q^*_\alpha$, is instructive for developing our algorithm. When dealing with finite samples, however, a challenge is to construct a loss function to target learning the soft Q-function directly, since the soft Bellman equation (3) is not known exactly and the state-action space is infinite. Therefore, the point of the optimization problem in equations (2) and (4) is to give an equivalent formulation of $q^*_\alpha$ that then inspires our estimation algorithm: take our equation (4), replace expectations with sample averages, and restrict the functions $q$ and $l$ to some function classes, as noted in Lines 151-152. The reformulation in equation (2) is trivial and works for any choice of objective function since the feasible set is the singleton $\{q^*_\alpha\}$, but the reformulation in equation (4) is useful because the optimization is now unrestricted, and the resulting algorithm (once we replace expectations with sample averages and restrict to function classes) depends very heavily on the choice of objective in equation (2); that choice is key to making our results work (we could likely employ any strongly convex function, but the square is easiest to deal with; we utilize the strong convexity in Sec 4 to establish statistical convergence, as we mention in Line 212). We will make this intent clearer up front in Line 146 when first introducing the optimization, rather than explaining it only afterwards.
In summary, the optimization problem in equation (2), albeit trivial as an optimization over a singleton, serves as motivation for our construction of the loss function in equation (5) of our Algorithm 1, which of course can only use finite samples (offline data) rather than population distributions. We will clarify the point of equation (2) up front upon revising. If you have any follow-up questions regarding our explanation, we would be delighted to discuss and clarify further.
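To illustrate the singleton point numerically, here is a toy tabular sketch (our own illustrative construction with made-up numbers, not part of the paper): iterating the soft Bellman backup from two different initializations converges to the same fixed point, reflecting that the soft Bellman equation has a unique solution.

```python
import math

# Toy tabular soft-Q illustration (all numbers are our own assumptions):
# two states, two actions, entropy weight alpha, discount gamma. The soft
# Bellman backup is q(s,a) = r(s,a) + gamma * sum_s' P(s'|s,a) * v(s'),
# with the soft value v(s') = alpha * log sum_a' exp(q(s',a') / alpha).
gamma, alpha = 0.9, 0.5
r = {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.0, (1, 1): 0.8}
P = {(0, 0): [0.7, 0.3], (0, 1): [0.1, 0.9],
     (1, 0): [0.5, 0.5], (1, 1): [0.3, 0.7]}

def soft_backup(q):
    def soft_value(s):
        return alpha * math.log(sum(math.exp(q[(s, a)] / alpha) for a in (0, 1)))
    return {(s, a): r[(s, a)] + gamma * sum(P[(s, a)][s2] * soft_value(s2)
                                            for s2 in (0, 1))
            for s in (0, 1) for a in (0, 1)}

def iterate(q0, n=500):
    q = dict(q0)
    for _ in range(n):
        q = soft_backup(q)  # gamma-contraction in the sup norm
    return q

qa = iterate({k: 0.0 for k in r})
qb = iterate({k: 5.0 for k in r})
# Both runs reach the same fixed point q*_alpha: the soft Bellman equation
# has a unique solution, so the feasible set of equation (3) is a singleton.
```

The estimation challenge discussed above is precisely that this backup is unavailable with finite samples, which is what the sample-average version of equation (4) addresses.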
**Q.** I read the proof of Theorem 5 and have some questions...
**A.** Thank you for the question, and thank you for reviewing our proofs. There is indeed a small typo in the proof, but the proof is otherwise correct, the result does hold, and Assumption 7 does play a crucial role. The issue is that optimal actions should have been excluded in the sum in the last inequality in line 637. That is, in all the sums over $a'$ in this proof, we should only sum over $a'$ that are sub-optimal at $s$.
Namely, the event you mention $q^*(s,a')-q^*(s,\pi^*(s))\leq 0$ should have in fact been $q^*(s,a')-q^*(s,\pi^*(s))< 0$, meaning in words the event that $a'$ is a suboptimal action at state $s$. Indeed, the final bound in line 637 should very simply read, in words, the indicator that $\pi^*$ and $\hat\pi$ disagree (at $s$) is bounded by the number of actions that appear optimal under $\hat q$ but are in fact suboptimal under $q^*$ (at $s$).
The argument then proceeds in line 638 by splitting the range of $\hat q(s,a')-\hat q(s,\pi^*(s))$ in this indicator from $(-\infty,0)$ into the mutually exclusive $(-\infty,-t)$ and $[-t,0)$ and then, after obtaining two indicators, getting a bound by omitting certain restrictions in the first indicator and other restrictions in the second, resulting in the two terms on the right-hand side of line 638.
The term you mention (the first term on the right-hand side of the first inequality in line 638) should in fact have had a $0>$ in front of $q^*$ so that the indicator there reads $I(0>q^*(s,a')-q^*(s,\pi^*(s))\geq -t)$ (currently, the "$0>$" in front is missing). Note that the event $0>q^*(s,a')-q^*(s,\pi^*(s))\geq -t$ can be read as, $a'$ is suboptimal and $q^*(s,a')-q^*(s,\pi^*(s))\geq -t$ holds. That is, another way to fix the term you mention (the first term on the right-hand side of the first inequality in line 638) is to restrict the sum to actions $a'$ that are sub-optimal at $s$. Since $t>0$ and $a'$ is suboptimal, the latter is non-trivial and does _not_ hold surely. In particular, for $t=0$ the event is surely false and as $t\to0$ the probability goes to 0 by continuity of probability. We bound the (non-trivial) probability of this event in line 639 using Assumption 7, which plays a very crucial role. We then choose $t$ to balance the two terms in line 641 to obtain the final bound.
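Schematically (our paraphrase of the corrected argument, with constants omitted), writing $\Delta^*(s,a') = q^*(s,a') - q^*(s,\pi^*(s))$ and $\hat\Delta(s,a') = \hat q(s,a') - \hat q(s,\pi^*(s))$, the split for each suboptimal $a'$ reads

$$ \mathbb{1}\{\hat\Delta(s,a') \ge 0\} \;\le\; \mathbb{1}\{0 > \Delta^*(s,a') \ge -t\} + \mathbb{1}\{\Delta^*(s,a') < -t,\ \hat\Delta(s,a') \ge 0\}, $$

where the first event is the near-margin event controlled by Assumption 7, and the second forces $|\hat q - q^*|$ to be at least $t/2$ at either $(s,a')$ or $(s,\pi^*(s))$, which is controlled by the $L^2$ estimation error.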
Luckily, the proof is otherwise correctly written as though the missing "$0>$" were in fact there in line 639. So everything is correct except for the missing "$0>$", which we will of course add so the sums over $a'$ are restricted to suboptimal actions. We will further add some in-words explanations for each line in the derivation along the lines of the above explanation. Thanks again for helping us locate this typo.
---
Rebuttal Comment 1.1:
Title: Acknowledgment of Rebuttal
Comment: Thank the authors for the detailed response.
Q1: I am confused about the reformulation of solving the Bellman equation as a minimax optimization problem...
I see the point now. In the next version, please make the reason for introducing the optimization problem clear.
Q2: I read the proof of Theorem 5 and have some questions...
I checked the proof again and have the following comments:
(1) In the rebuttal, the authors state that "The argument then proceeds in line 638 by splitting the range of $\hat{q}(s,a')-\hat{q}(s,\pi^*(s))$ in this indicator from $(-\infty, 0)$ into the mutually exclusive $(-\infty, -t)$ and $[-t,0)$...". I believe the events are split using $q^*(s,a')-q^*(s,\pi^*(s))$ instead of $\hat{q}(s,a')-\hat{q}(s,\pi^*(s))$.
(2) How was the second inequality obtained from the first inequality in Line 641?
(3) In general, I feel the appendix lacks quality, which is unfortunate for theoretical work. Besides the problem pointed out in my review, many steps in the proof are missing, which makes it hard for the reader to follow. Also, some equations are too long, which is a minor issue.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your prompt response.
**Q2**
1) Indeed, you are correct. We apologize for the typo.
2) Here is the detailed derivation for Line 641. We will expand it to make sure it's evidently clear.
* In the second inequality of Line 641, we optimize the term in the second line in Line 641 with respect to $t$.
* Set $\nu=2(E_{s \sim d, a\sim \pi_b^{\diamond}(s)} [|\hat q(s,a)-q^\star(s,a)|^2 ]+E_{s \sim d, a\sim \pi^\star(s)}[|\hat q(s,a)-q^\star(s,a)|^2 ])$ so that the second line of the display in Line 641 reads $|\mathcal A|(c(t/t_0)^\beta+2\nu/t^2)$. This holds for any $t>0$.
* Set $t=t_0^{\beta/(2+\beta)}\nu^{1/(2+\beta)}$ so that $(t/t_0)^\beta=\nu/t^2$.
* Obtain the bound $2c|\mathcal A|t_0^{-2\beta/(2+\beta)}\nu^{\beta/(2+\beta)}$.
* Note that $2^{\beta/(2+\beta)}\leq 2$ and $(x+y)^{\beta/(2+\beta)}\leq x^{\beta/(2+\beta)}+y^{\beta/(2+\beta)}$ for any $x,y>0$ since $\beta/(2+\beta)\in(0,1)$.
* Using this we obtain the bound $4c|\mathcal A|t_0^{-2\beta/(2+\beta)}(E_{s \sim d, a\sim \pi_b^{\diamond}(s)} [|\hat q(s,a)-q^*(s,a)|^2 ]^{\beta/(2+\beta)}+E_{s \sim d, a\sim \pi^*(s)}[|\hat q(s,a)-q^*(s,a)|^2 ]^{\beta/(2+\beta)})$.
* In the proof we ignore the constant factor of 4 since we do not care about universal constants -- we wrap it into the "Poly" term in the expression in Theorem 5. But when revising we will make the factor of 4 explicit in the proof so it is easy to read and follow.
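As a quick numerical sanity check of the balancing step above (our own illustrative script with arbitrary positive constants, not part of the paper), one can verify that the chosen $t$ equates $(t/t_0)^\beta$ and $\nu/t^2$, and that the subadditivity inequality used to split $\nu$ holds:

```python
import math

# Arbitrary positive test constants (illustrative only, not from the paper).
c, t0, beta, nu = 1.7, 0.3, 0.8, 0.05

# The choice t = t0^{beta/(2+beta)} * nu^{1/(2+beta)} equates (t/t0)^beta and nu/t^2.
t = t0 ** (beta / (2 + beta)) * nu ** (1 / (2 + beta))
lhs, rhs = (t / t0) ** beta, nu / t ** 2
assert math.isclose(lhs, rhs, rel_tol=1e-9)

# Both equal the closed form t0^{-2 beta/(2+beta)} * nu^{beta/(2+beta)}.
closed = t0 ** (-2 * beta / (2 + beta)) * nu ** (beta / (2 + beta))
assert math.isclose(lhs, closed, rel_tol=1e-9)

# Subadditivity (x+y)^p <= x^p + y^p for p in (0,1), used to split nu
# into its two expectation terms in the final bound.
p, x, y = beta / (2 + beta), 0.012, 0.038
assert (x + y) ** p <= x ** p + y ** p
```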
3) Thank you for the suggestion. We will go through the appendix and expand the arguments into more steps and add short in-words explanations to make it easier to follow the proofs. | null | null | null | null | null | null |
Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation | Accept (poster) | Summary: This paper provides both empirical evidence and theoretical arguments for the usefulness of the inverse dynamics criterion as a pre-training objective in multitask imitation learning setups. Furthermore, the authors go beyond the final conclusion and provide insightful analysis, showing that the advantages of the inverse dynamics objective over alternatives such as behavioral cloning are rooted in better out-of-distribution generalization and robustness in environments with fully latent context.
Strengths: 1. The authors support their claims with both compelling empirical evidence and novel theoretical arguments.
2. The paper is clearly written and accessible to readers with varying backgrounds.
Weaknesses: The superior performance of the inverse dynamics criterion in the examined setting is somewhat expected. Namely, unlike behavioral cloning, the latent multitask setting does not hurt it, since the inverse dynamics criterion is oblivious to the latent reward. And the examined architecture seems more natural for inverse dynamics than for forward dynamics. Arguably, forward dynamics could benefit from an architecture that can fuse the observation with the action earlier.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you elaborate on how your work relates to recent works on multitask imitation learning via generative pre-training on trajectories? It seems that the generative pre-training objective in these works implicitly combines the forward dynamics and behavioral cloning criteria.
[1] Kuang-Huei Lee, Ofir Nachum, Mengjiao Yang, Lisa Lee, Daniel Freeman, Winnie Xu, Sergio Guadarrama, Ian Fischer, Eric Jang, Henryk Michalewski, and Igor Mordatch. Multi-Game Decision Transformers, 2022.
[2] Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, Tom Eccles, Jake Bruce, Ali Razavi, Ashley Edwards, Nicolas Heess, Yutian Chen, Raia Hadsell, Oriol Vinyals, Mahyar Bordbar, and Nando de Freitas. A Generalist Agent, 2022.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: In its current state, the theoretical arguments lack formal claims and proofs. Thus they feel more like mathematical intuitions rather than solid theoretical results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
First, we would like to thank the reviewer for their positive comments about the compelling evidence, novel analysis, and clear writing in the paper.
Here we will address each of the weaknesses and the questions raised by the reviewer. Hopefully these comments can provide some additional clarity. If they do, we encourage the reviewer to increase their score, and otherwise are happy to answer any follow up questions.
1. (Somewhat expected results) We agree with the reviewer that the results can be explained well in hindsight, and indeed this is what our analysis section intends to do. However, we disagree that these results are obvious or expected since they are not found in the existing literature (as detailed in Section 2 and Appendix A). As a side remark, we were not initially expecting to see a significant advantage of inverse dynamics ourselves at the beginning of the project. If the reviewer could point to prior work in the literature that made these observations, we would be happy to consider it, but we do not think this is a valid weakness of the work.
2. (Architectural details) We agree with the reviewer that it is difficult to make comparisons between algorithms that require different architectures. However, we disagree that our choices were in any way unreasonable. We attempted to maximize performance of all baselines with a consistent compute budget across methods. We control for architectural differences as much as possible by using the exact same image encoder architecture and feature fusion method for all networks. To compare inverse dynamics and implicit forward dynamics explicitly, both networks consist of an MLP on top of the concatenation of two encoded vectors: $ [\phi(o), \phi(o')] $ for inverse dynamics and $ [\phi(o), \phi_a(a)]$ for forward dynamics. Then the MLP head has the capacity to fuse the features. Moreover, since the image needs to be encoded with a convnet, this seems to be the standard way to incorporate action information since it cannot be fed directly into the convnet. Note for explicit forward dynamics we need to add a decoder on top of encoded features to produce an image to compute the loss. The objective requires this larger network, but we think we handled the difference in the most carefully controlled way possible. If the reviewer can point to related literature or more specific arguments about why our architectural decisions did not make sense, we would be happy to discuss further.
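As a concrete illustration of the fusion scheme described above (our own minimal NumPy sketch; all dimensions and weights here are hypothetical, and the paper's actual encoders are convnets rather than random matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_head(z, W1, b1, W2, b2):
    """Two-layer ReLU MLP applied on top of the concatenated (fused) features."""
    h = np.maximum(z @ W1 + b1, 0.0)
    return h @ W2 + b2

# Illustrative sizes (hypothetical, not from the paper).
feat_dim, action_dim, hidden, batch = 32, 4, 64, 8
W1 = 0.1 * rng.standard_normal((2 * feat_dim, hidden)); b1 = np.zeros(hidden)
W2 = 0.1 * rng.standard_normal((hidden, action_dim)); b2 = np.zeros(action_dim)

phi_o = rng.standard_normal((batch, feat_dim))       # encoded observation phi(o)
phi_o_next = rng.standard_normal((batch, feat_dim))  # encoded next observation phi(o')

# Inverse dynamics: MLP over [phi(o), phi(o')] regresses the continuous action (MSE loss).
a_pred = mlp_head(np.concatenate([phi_o, phi_o_next], axis=-1), W1, b1, W2, b2)
assert a_pred.shape == (batch, action_dim)
# Implicit forward dynamics would instead fuse [phi(o), phi_a(a)] with the same head shape.
```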
3. (Question - generative pretraining) Thanks for this interesting question. Indeed, it does seem that generative pretraining could be thought of as a combination of BC and FD. It is not clear why creating such a combination would resolve the issues of either method in our particular setting, but this raises an interesting point that all of the objectives we consider can be combined in various ways (e.g. added together). Anecdotally, we did try adding together various objectives during development for this project but found the mixtures to perform worse than the simple objectives. In the end, in this paper we want to focus on presenting the cleanest controlled results that we can, so we opt to exclude these combinations for now, but it is definitely an important direction for future work to consider whether cleverly combining objectives could outperform the individual ones. We will add the papers that the reviewer referenced to the related work section as well as an expanded discussion of this direction of future work.
Also, as a note, we agree with the reviewer that the analysis section is not formal theory (as in theorems). It also only considers a substantially simplified model. We include it for intuition (which we believe the reviewer agrees is useful, based on the strengths section of the review). That said we will discuss how the lack of formal theorem statements is a limitation in a new limitations subsection to be included with the discussion at the end of the paper, and could present a fruitful direction for future theoretical work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I do not have any major concerns. However, I have decided to keep my original score. This is because the theoretical arguments lack formal claims and proofs. Additionally, the architecture for forward dynamics is quite different from that used in successful generative pre-training. | Summary: This paper analyzes the downstream fine-tuned performance in various multi-task settings after pretraining with different objectives, including:
1) Inverse dynamics
2) Contrastive
3) Forward Dynamics - Explicit
4) Forward Dynamics - Implicit
5) Behavior Cloning
They compare the performance to a model trained from scratch using image representations and augmentations (Pixels + Aug). They also compare to R3M and ImageNet-pretrained models as baselines. The main conclusion of this paper is that IDM works better than the other pretraining objectives and also works better than models trained from scratch in small-data regimes. They experimentally and theoretically show that IDM can recover the representation of the true ground-truth state, while other methods such as forward dynamics and behavior cloning can fail to do so.
Strengths: The paper presents a comprehensive evaluation of different pretraining objectives, including comparisons to models trained from scratch using image representations and augmentations (Pixels + Aug), as well as to R3M and ImageNet-pretrained baselines. The main conclusion drawn from this study is that IDM (inverse dynamics modeling) outperforms the other pretraining objectives, as well as models trained from scratch, especially in scenarios with limited data availability. The strengths of this paper lie in its valuable insights and experiments, which not only demonstrate the superior performance of IDM but also provide a clear understanding of why it works through various experimental analyses. These findings contribute to a better understanding of pretraining objectives and their effectiveness in recovering a representation of the true ground-truth state, highlighting the limitations of alternatives such as forward dynamics and behavior cloning, which fail to achieve comparable results.
Weaknesses: I think the paper lacks certain details and context in various places. I am not sure what objective is used while training the policy in the finetuning phase. It seems to be behavior cloning, but I am not sure. Another confusion I have is that the inverse dynamics objective is specified using a mean-squared-error loss, so it seems that the actions are continuous-valued, yet action predictions are evaluated using a binary cross-entropy loss; are action values then constrained between 0 and 1? It would be useful for readers if such details were clarified.
From my understanding, the approach relies on expert trajectories to obtain pretraining data which seems like a limitation to me since such data may not be available in many cases. Many approaches like decision transformer (https://arxiv.org/abs/2106.01345) train using suboptimal data. It would be nice to have an analysis in the paper which uses suboptimal data and shows how it affects the different pretraining objectives.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - It is not clear to me what context variables mean in the context of experiments. Could you clarify with examples what would be a context variable in some datasets considered in the paper?
- There is some evidence in literature in papers such as SGI and SPR that contrastive learning is useful for RL. Although their finetuning setup is different from this paper as they do not use imitation learning for finetuning. Do the authors have any intuition as to why contrastive learning helps there but not here?
SGI - https://arxiv.org/abs/2106.04799
SPR - https://arxiv.org/abs/2007.05929
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have addressed limitations.
If the authors use expert data for training as I pointed above, then I think they should mention that as a limitation as well as I think it might be difficult to obtain such data in many environments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
First, we would like to thank the reviewer for their comments about the strengths of the paper, namely the "comprehensive evaluation" and resulting "valuable insights".
Here we will address each of the weaknesses and questions raised by the reviewer. Hopefully these comments can provide some additional clarity. If they do, we encourage the reviewer to increase their score, and otherwise are happy to answer any follow up questions.
1. (Setup details) Thanks for raising these issues to our attention. Indeed, we use behavior cloning during the finetuning phase; we will clarify this more explicitly in Equation (5). For inverse dynamics, indeed we also use mean squared error. We are not sure where the reviewer sees a binary cross-entropy loss in the paper since it is never used or mentioned. Actions are always continuous and always trained using mean squared error. Perhaps the confusion arises because we report task success as a percentage (valued between 0 and 1), but this is really the average reward, not a measure of action prediction accuracy (the reward is 1 for success and 0 for failure). We will clarify this in the paper.
2. (Suboptimal data) We agree that extending our results to cases with suboptimal data would be an interesting direction for future work and explicitly say so briefly in the discussion in lines 361-362. However, algorithms like decision transformer that rely on access to task rewards are beyond the scope of our paper where we instead focus on the imitation setting where there is no reward information available. We made this choice so that we could create carefully controlled experiments to just test the different representation learning algorithms without the additional complexities of reward-based learning. We will add a broader discussion of this decision in the discussion section as a great direction for future work.
3. (Question - context variables) Thanks for this question, the context variables are detailed in Appendix C, but we realize that we likely did not provide enough clarity in the main text. For example, in the pointmass task the context determines the 2d continuous goal location, in the kitchen task the context determines which of 24 possible sequences of subtasks is the desired behavior (e.g. open the microwave then put the kettle on then turn on the light then open the cabinet is an example of one specific context), and in the metaworld tasks the context determines which of 50 behaviors is desired (e.g. close the box is one context and lock the door is a different context). We will move some of these examples from the appendix into the main text to increase clarity.
4. (Question - contrastive learning) Thanks for bringing these related works to our attention. There are substantial differences between our setup and the one in SGI or SPR beyond just imitation vs. RL (although that is indeed a major difference) that could lead to different results for contrastive learning. SPR is a single-task online method, which is very different in nature from our offline multitask setup that explicitly considers the ability of representations to transfer to novel contexts (i.e. tasks). SGI is more similar since they also consider offline pretraining; however, instead of considering a setting where information is transferred across tasks from multitask expert pretraining data, they consider single-task learning from non-expert policies by collecting data from suboptimal (or random) policies that are attempting to solve the same task. We hypothesize that this substantial difference in dataset composition and the gap between pretraining and finetuning tasks is likely the reason that they find contrastive learning to be more useful. It is interesting to note that even in this substantially different setting, the SGI paper finds inverse dynamics modeling alone to be more useful than contrastive learning alone in an ablation (Table 3), so maybe our analysis could be extended to the setting they consider in SGI in future work. We can add a discussion of these papers to the extended related work in Appendix A.
The biggest difference is that those papers do single-task learning with auxiliary objectives, while we do multitask learning. Explicitly, SGI and SPR consider an online RL problem where there is only one task (i.e. one context) which determines the reward function. They do not attempt to re-use representations across tasks (i.e. for different reward functions). On the other hand, we focus on a multitask imitation setup where we are given a fixed dataset of offline experience from many different contexts (i.e. tasks) and learn one representation.
Also, as a note, we will explicitly add some of the above discussion about suboptimal data to an explicit limitations subsection included with the discussion at the end of the paper (currently limitations are only discussed in passing, not in one central place as the reviewer noted). | Summary: The paper studies how to effectively pretrain representations for imitation learning, where we have access to a pretraining dataset of multi-task demonstrations with an unobserved latent context variable for each task, and a limited amount of finetuning demonstrations for transferring to a novel context. The goal in this setup is to learn good low-dimensional representations of high-dimensional (e.g. visual) input to enable transfer to novel contexts for finetuning.
The paper claims that inverse dynamics modeling is a well-suited objective for the imitation-learning pretraining setting and provides empirical evidence supporting this claim. In addition, the paper derives a theoretical analysis using a simple, general environment model.
Strengths: 1. The paper presents an extensive set of experiments comparing different methods of pretraining policy representations, using multiple training objectives: inverse dynamics, behavior cloning, forward dynamics (implicit and explicit), training a policy from scratch, using pretrained visual representations, and contrastive pretraining.
2. The results are presented on a diverse suite of tasks, ranging from simpler environments like PointMass to slightly more difficult ones like the MetaWorld or Kitchen environments.
3. The paper evaluates how well these representations perform with respect to the dataset size used for pretraining and finetuning, and how well policies learned from these methods generalize to in- and out-of-distribution evaluation tasks. The authors find that ID pretraining leads to better-performing representations on average across the benchmarks. The paper also presents a nice analysis of how well the learned representations can predict states and actions; interestingly, ID performs quite well at predicting all of this information compared to any other method.
4. Finally, the paper also presents a theoretical analysis to support the empirical results using a simplified model.
5. The paper doesn't present any novel method, but it presents interesting and valuable analysis that is worth sharing with the community.
Weaknesses: 1. It is not clear why pretrained robotic representations like R3M perform so poorly on almost all of the tasks. The R3M paper also has results on MetaWorld and Franka Kitchen tasks, yet the results for the kitchen environment with R3M seem quite poor based on figures 11 & 12 in the appendix. Can the authors please clarify whether the experiments use the same setup as the R3M paper? It might also be good to provide further insight into why this is the case.
2. The evaluation of these methods is shown on different simple simulated environments, but it might also be interesting to compare these results on more photorealistic simulation environments like Habitat or AI2Thor for tasks like manipulation.
3. The performance of the training-from-scratch baseline seems to almost always equal the performance of policies trained using ID-pretrained representations as the training dataset size is scaled up. This is a bit concerning, as pretraining also requires large amounts of data for any of these methods. Do the authors have a specific set of tasks where they observe sample efficiency, i.e., where ID-pretrained representations converge to similar performance early during finetuning? It would be good to outline the gains achieved apart from task performance to quantify the importance of pretrained representations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The paper doesn't present any novel method but presents interesting and valuable analysis that is worth sharing with the community.
As outlined in the weaknesses section, it would help if the authors could discuss the questions mentioned there.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Scaling these methods to much more complex, high-dimensional, and long-horizon tasks might not be trivial given the compute requirements.
No novelty in contributing a new method, but the paper presents interesting analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to thank the reviewer for their thoughtful and detailed comments about the strengths of the paper on both the experimental and analytical/theoretical fronts.
Here we will address each of the weaknesses raised by the reviewer in turn. Hopefully these comments can provide some additional clarity. If they do, we encourage the reviewer to increase their score, and otherwise are happy to answer any follow up questions.
1. (R3M results) Thanks for raising this subtle issue. Indeed, there are several low-level but important differences between our evaluation setup and the one used in the R3M paper. For the kitchen tasks in particular, the biggest difference is that while the R3M paper considers only learning single subtasks (e.g. slide the door open, see section 4.2 of the R3M paper), we consider learning *sequences of subtasks* (e.g. open the microwave, put the kettle on, turn on the light, *and* slide the door open, all in one trajectory). The R3M paper considers explicitly easier tasks. We did this because the kitchen data itself contains sequences of subtasks, not single subtasks (following the paper that introduced the kitchen dataset). For the metaworld tasks, there is a similar pattern where R3M chose to evaluate on particularly easy tasks (this is why we consider two different splits of metaworld, one with the R3M eval tasks and one with the original eval tasks from the metaworld paper). Another difference is that, to focus solely on feature learning, we only pass in the image observation and not the proprioception, while R3M passes in both. Again this makes the problem a little bit more difficult. Finally, we also render images at a lower resolution due to computational constraints. When applying R3M we resize the images, but they will be more pixelated than the ones R3M was originally evaluated on. All of these differences likely contribute to the results that we report. We will add a discussion of these differences to the appendix of the paper. As one last point, it is important to note that R3M is attempting to solve a different problem of general image representation learning that transfers across domains, while we are focusing on within-domain but cross-task generalization (which is easier to analyze in a controlled way).
2. (Photorealistic environments) We agree that evaluation in more challenging environments would be an interesting extension of our work (as we briefly state in the discussion in lines 360-361). However, we would argue that the tasks that we consider are not toy, and do provide sufficient complexity for interesting results. The main goal of this paper was to create carefully controlled and targeted experiments in simple domains to get clear insight, and we think we have accomplished this. Moreover, we prioritized domains with large numbers of predefined tasks and datasets with a single morphology that had been used in related work. We are aware of Habitat and AI2-Thor, but have not seen similar suites of tasks with demonstration data as exists for the environments that we chose (and as a result, none of the related work uses these environments either). That said, we definitely agree that future work to scale up these insights by building better environments and datasets and trying things on real robots is indeed a good idea. We will add a broader discussion of this issue to the paper.
3. (Data scaling) As the reviewer notes, once we approach the limit of large finetuning data, the gap between pretraining and training from scratch disappears. We want to emphasize that this is totally expected behavior for testing transfer learning with any pretraining algorithm. As long as the training from scratch algorithm is sound, it will approach optimal performance if given enough data. The interesting case for pretraining (and the one that we focus on in our main experiments, e.g. in Figure 1), is when we only have small amounts of finetuning data. Figure 1 shows that in this regime, there is indeed a benefit to pretraining. But as the reviewer notes, we should indeed be careful to point out that these gains explicitly depend on the relative amounts of pretraining and finetuning data that are available as shown explicitly by sweeping these parameters in Figure 3. We will make this more clear in the paper.
Also, as a note, we will explicitly add some of the above discussion about more challenging domains to an explicit limitations subsection included with the discussion at the end of the paper (currently limitations are only discussed in passing, not in one centralized place).
---
Rebuttal Comment 1.1:
Comment: Thanks for responding to my questions. I think my concerns have been addressed in general. I'll keep my score unchanged to reflect this. | Summary: This paper investigates the effectiveness of several popular representation learning techniques in the context of imitation learning. While there are no novel components in this paper, its main strength is that it carefully designs the experimental setups to control for several factors that might otherwise disrupt the message of the experiments. In conclusion, the paper shows that the inverse dynamics modelling objective is the best among those considered in the paper.
Strengths: - Crystal-clear writing that clearly explains the setup and the results in a well-structured manner.
- Extensive experimental results that show inverse dynamics modelling can be better than other objectives in a controlled way.
Weaknesses: - While the results are quite conclusive in the considered setup of imitation learning with clean expert demonstrations, it might not hold in a setup where the dataset is suboptimal so that it's difficult to learn the intention of the agent for making $a_{t}$ even with the access to consecutive two observations $o_{t}$ and $o_{t+1}$. Though I understand that this is not the main goal of this paper, having more discussion on this front, along with the discussion on the paper that discusses the sufficiency of representation learning for control [1], can be useful and strengthen the paper.
- Current trend might not be conclusive as the considered domains are still very simple. Including some additional experiments on more challenging domains like RLBench [2] and RoboSuite [3], which are considered more real-world-ish than simple Kitchen or Meta-World domains, or even on real-world robotics domains (I understand this might not be available in most cases) can make the conclusion of this paper be much more convincing.
- Investigation into the effect of data configuration can be more helpful for understanding when the inverse dynamics modelling can be effective. For instance, how does the trend change when the pre-training and fine-tuning domains are more simpler domains?
- This is very minor, but increasing the resolution of the result figures and making them vectorized (by exporting them as PDFs) in the draft could help the clarity of the paper. Also, there's a typo in line 278 ("the the").
[1] Rakelly, Kate, et al. "Which Mutual-Information Representation Learning Objectives are Sufficient for Control?." Advances in Neural Information Processing Systems 34 (2021): 26345-26357.
[2] James, Stephen, et al. "Rlbench: The robot learning benchmark & learning environment." IEEE Robotics and Automation Letters 5.2 (2020): 3019-3026.
[3] Zhu, Yuke, et al. "robosuite: A modular simulation framework and benchmark for robot learning." arXiv preprint arXiv:2009.12293 (2020).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Please address my concerns & questions in Weaknesses.
- Having a recurrent architecture for BC or stacking some frames might resolve the issue of BC for inferring the latent variable. Did you consider these baselines?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I couldn't find a discussion of the limitations of this paper in the submitted main draft.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we'd like to thank the reviewer for their positive comments about the clarity of the paper and comprehensiveness of our controlled experiments.
Here we will address each of the weaknesses and the additional question raised by the reviewer in turn. We hope these comments provide some additional clarity; if they do, we encourage the reviewer to increase their score, and otherwise we are happy to answer any follow-up questions.
1. (Discussion of suboptimal data) We agree with the reviewer that the results do not necessarily extend to cases of suboptimal data (as we state briefly in the discussion in lines 361-362). The main reason for this choice is to keep everything self-contained in an imitation framework. Once data is suboptimal, then it makes more sense to look towards methods like offline RL which were beyond the scope of our study. We would hope that the insights from the imitation case would transfer to RL settings, but this is left to future work. We will add a broader discussion of this issue to the paper.
2. (More challenging domains) Again we agree with the reviewer that using more challenging or real-world domains would be a nice extension of the paper (as we briefly state in the discussion in lines 360-361). However, we would argue that the tasks that we consider are not toy and do provide sufficient complexity for interesting results. The main goal of this paper was to create carefully controlled and targeted experiments in diverse domains to get clear insight, and we think we have accomplished this. Moreover, we prioritized domains with large numbers of predefined tasks and datasets with a single morphology. We decided that RoboSuite did not have sufficiently many tasks, and chose Meta-World over RLBench because it had been used more frequently in related work (like R3M) and depends on MuJoCo rather than CoppeliaSim, which was not supported on our compute infrastructure. That said, we definitely agree that future work to scale up these insights by building better environments and datasets and trying things on real robots is indeed a good idea. We will add a broader discussion of this issue to the paper.
3. We are not sure what the reviewer meant by "how does the trend change when the pre-training and fine-tuning domains are more simpler domains?" Any further clarification would be much appreciated.
4. (Typo and resolution) Thanks for pointing out these issues, we will fix them in the paper.
5. (Question -- frame stacking) Thanks for raising this interesting question. We did try adding frame stacking at one phase of experimentation and found that it did not help, so we did not scale it up to the full experiments. If the reviewer thinks this is a crucial issue, we can attempt to go back and run these experiments across the full suite. From the theoretical side, we do not think that adding recurrence or frame stacking will resolve the dependence issue in general. Looking at the graphical model in Figure 1(a), note that even if we condition on $ o_1 $ and $ o_2$, we do not break the connection between $ c $ and $ a_2$. However, it is possible that in certain environments the history could be sufficient to uniquely determine $ c $ and thus break the dependence. That said, just because $ c $ could be inferred does not mean that we will learn the desired features which are importantly independent of $ c$. It seems that we may instead learn features that depend explicitly on $ c $ and thus would struggle to generalize to new contexts.
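The d-separation argument above can be made concrete with a toy discrete model (our own illustration, not a construction from the paper): if a latent context $c$ flips the expert's policy while leaving the observations untouched, then even conditioning on both observations leaves $a_2$ dependent on $c$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent context c is independent of the observations, but the
# expert's action depends on both the observation and the context:
# when c == 0 the policy is flipped.
c = rng.integers(0, 2, n)
o1 = rng.integers(0, 2, n)
o2 = rng.integers(0, 2, n)
a2 = np.where(c == 1, o2, 1 - o2)

# Condition on a fixed pair of observations and compare P(a2 = 1 | c):
mask = (o1 == 0) & (o2 == 1)
p_given_c0 = a2[mask & (c == 0)].mean()  # 0.0
p_given_c1 = a2[mask & (c == 1)].mean()  # 1.0

# a2 and c remain dependent even given (o1, o2), so behavioral cloning
# on observation pairs cannot marginalize the context away here.
print(p_given_c0, p_given_c1)
```

In this toy case the history happens to determine $c$ after two steps, which mirrors the caveat in the reply: inferability of $c$ does not by itself yield features independent of $c$.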
Also, as a note, we will explicitly add some of the above discussion about suboptimal data and more challenging domains to an explicit limitations subsection included with the discussion at the end of the paper (currently limitations are only discussed in passing, not in one central place as the reviewer noted).
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. What I meant by simpler domains was easier ones, as in [Chen et al., 2022], but I think it is not crucial. Similarly, I don't think the frame-stacking experiments are crucial. I have no major concerns and have decided to maintain my score, considering that investigation of more challenging domains and suboptimal data is left for future work.
[Chen et al., 2022] Chen, Xin, Sam Toyer, Cody Wild, Scott Emmons, Ian Fischer, Kuang-Huei Lee, Neel Alex et al. "An empirical investigation of representation learning for imitation." arXiv preprint arXiv:2205.07886 (2022). | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On permutation symmetries in Bayesian neural network posteriors: a variational perspective | Accept (poster) | Summary: This paper considers mode-connectivity in the context of Bayesian neural networks. It shows roughly what we'd expect, based on the results of Entezari et al. (2022).
Strengths: It shows roughly what we'd expect, based on the results of Entezari et al. (2022).
Weaknesses: That's my main issue with the paper. Bayesian neural networks really aren't that different from neural networks. So we would definitely expect the results of Entezari et al. (2022) to apply in the BNN context. I can't see any interesting contributions on top of that, so I can't recommend acceptance. I would recommend submission to a more specialised venue (e.g. AABI or UAI).
Other points:
* An important reference for permutations: Aitchison, Laurence, Adam Yang, and Sebastian W. Ober. "Deep kernel processes." International Conference on Machine Learning. PMLR, 2021.
* Legend for Fig. 8 is _way_ too small. In general the Figures feel a bit crammed in.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's feedback, but we respectfully disagree with their assessment of our paper.
Below is our best attempt at addressing the reviewers' concerns:
* **The paper shows expected results**: The reviewer expresses their main issue with the paper, stating that Bayesian neural networks are not significantly different from neural networks, and therefore, the results are expected based on Entezari et al. (2022). However, while Bayesian neural networks share some similarities with neural networks, their probabilistic nature and approximate inference techniques introduce complexities not present in standard neural networks. As such, zero-barrier connectivity in the context of Bayesian neural networks is a non-trivial extension and a novel contribution to the field. Our work demonstrates that connectivity can indeed be achieved with approximate Bayesian neural networks, offering insights into their behavior and potential applications in uncertainty quantification.
* **Lack of interesting contributions**: This point is very much linked to the previous one. The reviewer states that they cannot see any interesting contributions beyond the prior work. We strongly disagree with this assessment. Our paper introduces the notion of linear connectivity in Bayesian neural networks, providing a formalism as well as a novel perspective on understanding their posterior landscapes. Additionally, we propose a tractable algorithm based on the Linear Assignment Problem to connect approximate solutions efficiently. These contributions offer valuable insights and open up new avenues for exploration in Bayesian neural networks.
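As a rough illustration of what such a LAP-based matching step can look like (our sketch, not the authors' implementation, which performs coordinate descent across layers), one can align the units of a single mean-field Gaussian layer using the closed-form squared 2-Wasserstein distance between diagonal Gaussians as the assignment cost:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_units(mu_a, sig_a, mu_b, sig_b):
    """Find the permutation of model B's units that best aligns them
    with model A's, for one layer of a mean-field Gaussian posterior.

    Cost between unit i of A and unit j of B is the closed-form
    squared 2-Wasserstein distance between diagonal Gaussians:
    ||mu_i - mu_j||^2 + ||sig_i - sig_j||^2.
    """
    cost = ((mu_a[:, None, :] - mu_b[None, :, :]) ** 2).sum(-1) \
         + ((sig_a[:, None, :] - sig_b[None, :, :]) ** 2).sum(-1)
    _, perm = linear_sum_assignment(cost)  # minimizes the total cost
    return perm

# Sanity check: model B is model A with its units shuffled.
rng = np.random.default_rng(0)
mu_a = rng.normal(size=(8, 16))            # 8 units, 16 incoming weights
sig_a = rng.uniform(0.1, 1.0, size=(8, 16))
true_perm = rng.permutation(8)
perm = match_units(mu_a, sig_a, mu_a[true_perm], sig_a[true_perm])
assert np.array_equal(true_perm[perm], np.arange(8))  # shuffle undone
```

A full algorithm would also have to permute the next layer's incoming weights consistently and iterate over layers, which is where the coordinate-descent structure comes in.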
* **Recommendation for Specialized Venues**: The reviewer suggests that our work would be more suitable for a specialized venue, such as AABI or UAI. While we appreciate the suggestion, we believe that our work addresses an important question in the field of deep learning and Bayesian neural networks, making it relevant to a broader audience. Mode connectivity is a topic of significant interest, and demonstrating its applicability in the context of BNNs, along with the proposed algorithm, contributes to the understanding of optimization landscapes and uncertainty modeling in deep learning. We respectfully maintain that our paper is well-suited for consideration in the current venue.
For the remaining points:
* **Additional Reference**: We appreciate the reference provided by the reviewer on permutations. We will carefully consider it and include it in our revised manuscript.
* **Figures and Presentation**: See the general comment. In the revised version, we will ensure that the figures are appropriately sized and presented for better clarity and readability. | Summary: The authors conjecture that after accounting for permutation symmetries in overparametrized neural networks that lead to same functional behavior, the low-loss solutions are linearly connected. In the context of Bayesian neural networks with variational inference, the authors use this conjecture to propose a simple matching algorithm based on Linear Assignment Problem, to find an equivalent variational distribution but attains linear connectivity with another solution. The experiments demonstrate that it is possible to find such linearly connected no-barrier regions.
Strengths: - The authors take the ideas of mode connectivity in the deep learning literature, and try to find an equivalent formulation at the level of distributions over parameters as in Bayesian neural networks. It is a very interesting concept.
- The concept becomes even more compelling through the successful demonstration of an algorithm that finds such a low-loss barrier. However, I do want to say that the existence is not surprising, given that the deep learning literature already demonstrates it, and the ELBO just optimizes "another" (marginalized) likelihood instead of the un-marginalized likelihood typically used in DL.
Weaknesses: - I believe that the title of the paper should qualify variational BNNs instead of just BNNs. Approximate VI remains practically distinctive in properties from other approximate Bayesian inference methods like MCMC/HMC. It also helps contextualize the scope for the reader, and something that the authors readily align with in the discussion as well.
- Definition 3 about barrier loss seems to be connected to $\mathcal{L}$ instead of $\mathcal{L}_{\mathrm{ELBO}}$. We are interested in the loss computed by the marginalization of the model parameters under the approximate posterior. The authors choose this to be computed on the test data, which seems like jumping ahead to information that should not be used by the modeler. See also Question 1. Figure 3 shows such an algorithm does not benefit train landscape at all. Or did I misunderstand?
- In Line 159, the authors claim to argue that Wasserstein distance is better. Is the argument that covariance information is lost? I think the readers would appreciate a tiny note on using KL-divergence in the appendix and some results showing failure or becoming non-informative. See also Question 3.
- A key missing feature in the experiments is that the identification of low barrier region is not followed by the construction of posterior predictive distribution to compute the generalization error. Only likelihoods are reported, and I am wondering if the benefits of Bayesian model averaging shine as well as or better than reported in earlier works. Can the authors report those numbers? At least a basic SGD, VI, and the Aligned VI proposed in this work.
- The design choice of skipping data augmentation severely restricts the applicability of these results, especially with vision problems where data augmentation is still commonly used. I do, however, empathize with the authors on this one and do not consider this to be a big limitation to discount the contributions, since the rest of the community suffers with this limitation too.
### Minor
- Please use `\citet` instead of `\citep` when referring directly to papers. For instance, in Line 134.
- A very small description of the original LAP problem (introduced in Line 167) in the appendix would be much appreciated.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. The posterior predictive likelihood computed on test points in Eq. (5) is also used to define the functional loss barrier in Definition 3. Don't we want the loss barrier to be defined on training points?
- Is this simply a loose usage of the definition, or did the authors specifically imply using test points to construct barrier loss? It looks like from Figure 3 that all kinds of points are used.
- I think it is completely fine to check using the test points as a diagnostic for correlation between behavior at test and train time.
- But then, the matching doesn't seem to impact train landscape at all. Can the authors comment on this, since if I understand this correctly, this only remains a diagnostic method and not a method to actually generate samples from the posterior.
2. Did the authors try interpolating with a convex mixture of two distributions? There's no need to report results for this, but generally curious if such an interpolation provided some reasonable results since the authors only claim that this choice is trivial and non-informative.
3. What would using a KL-divergence instead of Eq. (16) do in practice? Would the distances be always too large to meaningfully distinguish for the LAP problem?
4. Have the authors considered accounting for such functional symmetries during the training itself, instead of a post-hoc matching procedure?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes. Also see, weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the detailed review and valuable feedback provided by the reviewer. We have carefully considered each point raised and addressed them below.
* **Scope of the claim**: The reviewer suggests that the title should qualify "variational BNNs" instead of "BNNs" to better contextualize the scope of the paper. Given that this point was raised by Reviewer 2rWU as well, we will be changing Conjecture 1 to better reflect the focus on the variational approximation.
* **Definition of Barrier Loss and Use of Test Points**: The reviewer correctly points out that Definition 3, which defines the barrier, is connected to the posterior predictive likelihood computed on test points, which may seem to incorporate information not available at train time. We acknowledge this concern and apologize for any confusion. In practice, the barrier loss can be computed on both training and testing points. Indeed, all plots report both train and test barriers. We will clarify this in the revised manuscript (line 102 will be changed from "$\{x_\star, y_\star\}$ are respectively the test point and its corresponding label" to "$\{x_\star, y_\star\}$ are respectively the input point under evaluation and its corresponding label". We also agree with the reviewer's observation that the matching procedure may not significantly impact the train landscape. The primary aim of our method is to establish linear connectivity between approximate Bayesian solutions rather than improving the train landscape.
* **Use of Wasserstein Distance**: The reviewer requests further clarification on why we argue that the Wasserstein distance is better than using KL-divergence. For this we refer the reviewer to the Appendix, where we show that the KL-divergence simply reduces to a distance between means, which disregards any information regarding covariances. In the appendix, we also show a simple motivating example of such failure.
* **Data augmentation**: Skipping data augmentation in our training setup was purely a choice of convenience. We know that with data augmentation we need to be careful once moving to Bayesian inference, for multiple reasons (e.g., the cold posterior effect [92], re-weighting of the likelihood due to the increase of the effective sample size [68]). For this reason, we didn't want to "pollute" the results with spurious effects coming from DA rather than from the phenomenon under analysis. Nonetheless, we also want to emphasize that nowhere during the development of the method do we make an assumption about DA, so our method remains applicable in both cases. During this rebuttal week, we were able to run some comparisons with data augmentation for the ResNet20 models. Figure 8 in the rebuttal PDF summarizes this experiment: we see that in both cases (with and without DA), we are still able to recover similar low-barrier solutions in terms of likelihood (and accuracy---not reported for space reasons) when following our proposal to align the distributions. As discussed in the general comments, we are planning to add an additional paragraph in Sec. 5 to better comment on the effect of DA.
* **Benefits of BMA**: In terms of generalization on accuracy, indeed BMA is beneficial w.r.t. point estimates. See the table below, where we report the accuracy of the two models, as well as the interpolated one $\tau=0.5$.
**ResNet/CIFAR10 (Accuracy)**
| | Model 0 | Model interpolated | Model 1 |
| ---------------|---------|--------------------|---------|
| VI | 0.8580 | 0.1025 | 0.8556 |
| **VI aligned** | 0.8580 | 0.7413 | 0.8556 |
| SGD | 0.8558 | 0.1432 | 0.8516 |
**MLP/CIFAR10 (Accuracy)**
| | Model 0 | Model interpolated | Model 1 |
| ---------------|---------|--------------------|---------|
| VI | 0.5718 | 0.2589 | 0.5734 |
| **VI aligned** | 0.5718 | 0.5647 | 0.5734 |
| SGD | 0.5546 | 0.2500 | 0.5545 |
These numbers are in line with previous work in the literature.
We also report the reference comparison with the likelihood below
**ResNet/CIFAR10 (Likelihood)**
| | Model 0 | Model interpolated | Model 1 |
| ---------------| ----------|--------------------| ----------|
| VI | -0.417142 | -2.32982 | -0.427943 |
| **VI aligned** | -0.417142 | -0.71424 | -0.427599 |
| SGD | -0.702219 | -2.43758 | -0.731933 |
* **Mixture of distributions**: as we discussed for Reviewer 2rWU, we argue that a mixture is not informative for studying the geometry of the posterior. To visualize this argument, please check Figure 1 in the rebuttal PDF. With mixtures we are essentially re-weighting the two (good) solutions, without "transport" of distribution mass between the two extremes. As we said in the general comment, this means that if we look at the barrier, we don't see any barrier, not because barriers don't exist, but simply because we are interpolating in a way that prevents us from exploring the geometry of the posterior.
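The distinction can be illustrated numerically (our sketch, assuming 1-D diagonal Gaussians as in mean-field VI): the 2-Wasserstein geodesic between two Gaussians interpolates their means and standard deviations, actually transporting mass, while a mixture with the same weight merely re-weights the two endpoints and stays bimodal.

```python
import numpy as np

rng = np.random.default_rng(0)
mu0, sig0 = -3.0, 0.5   # endpoint distribution q0
mu1, sig1 = 3.0, 0.5    # endpoint distribution q1
t = 0.5                 # midpoint of the interpolation

# 2-Wasserstein geodesic between 1-D Gaussians: interpolate the mean
# and standard deviation -- probability mass is transported.
geo = rng.normal((1 - t) * mu0 + t * mu1,
                 (1 - t) * sig0 + t * sig1, size=100_000)

# Mixture with the same weight: mass just gets re-weighted, and the
# interpolant stays bimodal at the two endpoints.
pick = rng.random(100_000) < t
mix = np.where(pick,
               rng.normal(mu1, sig1, size=100_000),
               rng.normal(mu0, sig0, size=100_000))

print(np.abs(geo).mean())  # small: geodesic midpoint concentrates at 0
print(np.abs(mix).mean())  # large: mixture samples stay near +-3
```

Only the geodesic forces intermediate distributions to pass through the region between the two solutions, which is exactly where a barrier (or its absence) can be observed.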
* **KL divergence**: Thanks for the question. In the appendix we show how using the KL divergence in practice reduces to a distance between means (without accounting for the actual shape of the distributions). We also include a simple visualization where this fails in practice, i.e., the LAP with the KL objective fails to recover a clear symmetry.
* **Accounting for Functional Symmetries during Training**: While the idea of accounting for functional symmetries during training is intriguing, it presents a challenging optimization problem. We have not explored this approach in the current paper, but it could be an interesting direction for future research. We will mention this possibility in the discussion section to encourage further investigations. | Summary: The authors extend the recent idea of linear mode connectivity up to permutation symmetry to the setting of Bayesian neural networks. They demonstrate that two different variational approximations to the Bayes posterior enjoy mode connectivity along the Wasserstein geodesic of one distribution, and a suitably permuted version of the other. Such a permutation is discovered by replacing the L2 distance in previous work with the more suitable Wasserstein distance, and similarly, the objective can then be relaxed to a layer-wise linear assignment problem, leading to a tractable coordinate descent algorithm. The authors verify their results numerically, showing that two approximate solutions can indeed be connected for modern networks such as ResNet20 on CIFAR10.
Strengths: 1. Studying mode connectivity for approximate Bayesian inference is a natural follow-up question to previous work, while at the same time requiring non-trivial extensions such as the Wasserstein geodesic and Wasserstein distance. It is very nice and somewhat surprising that the resulting objective (which is seemingly involved) can be relaxed in a very similar spirit, leading to a tractable problem.
2. The experimental setup is quite carefully created and a lot of ablations for different parameters such as the prior variance, the temperature and the width of the network are performed, giving a very complete picture.
Weaknesses: 1. The proposed algorithm and the setup seem to heavily rely on the specific approximation method, namely variational inference with a Gaussian distribution of diagonal covariance as the variational family. Even if the diagonal covariance assumption is relaxed, it is not obvious to me how one can guarantee tractability, let alone moving to multi-modal approximations such as MCMC where not even the Wasserstein geodesic or distance are known in closed form. This is somewhat unsatisfying, since the power of BNNs and the Bayesian posterior in general only really starts to unfold once multiple modes are leveraged. It is thus somewhat unclear how much a unimodal approximation to the Bayes posterior really captures and how much it is really different from a simple point estimate.
2. In general, the problem seems to get less and less interesting, the more precise the approximation to the Bayes posterior becomes. This is simply because the Bayes posterior would incorporate all possible modes (given the prior gives them some mass) and hence there is no other posterior to align with. This is different from the SGD setup, where multiple modes always remain a problem, precisely due to the point-wise nature of the algorithm. The authors are very careful in the main text and always refer to approximate Bayesian inference, but I think it would be helpful to clarify this discrepancy.
3. I understand why the authors use the log-likelihood as a metric to evaluate connectivity, given that it is a proper scoring rule, but it is also very difficult to interpret how meaningful a decrease in likelihood is for practice. This is in contrast to test accuracy, where we have a better understanding of the scale. It would be helpful if the authors could provide the same plots for test accuracy instead of log-likelihood. That way it would also be easier to assess how meaningful the barrier in Fig. 3 for ResNet20 actually is. It would also be helpful to reproduce the same plots as in Fig. 3 without adding the non-aligned connectivity score since this massively increases the scale. That way it is actually difficult to tell how connected the aligned solutions are. They are obviously way more connected than the baseline, which is nice but it would be better to see it in more detail.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is data augmentation employed for the experiments involving the cold posterior effect? Data augmentation has been observed to be the main driver of the CPE [1, 2, 3] and hence using it would probably lead to a more visible effect. It has also been recently justified that tempering is a principled way to use data augmentation in Bayesian frameworks [4], so it would not affect the validity of the approximation.
2. Are there gains in terms of test accuracy of the (tempered) approximate Bayesian posteriors versus a standard SGD baseline?
[1] What are Bayesian Neural Network Posteriors Really Like?,
Izmailov et al.
[2] Data augmentation in Bayesian neural networks and the cold posterior effect,
Nabarro et al.
[3] Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect,
Noci et al.
[4] How Tempering Fixes Data Augmentation in Bayesian Neural Networks,
Bachmann et al.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and insightful comments.
* **Tractability of Approximation Methods**: The reviewer raises a valid concern regarding the reliance of our proposed algorithm on specific approximation methods, such as variational inference with a Gaussian distribution of diagonal covariance. We agree that the power of Bayesian neural networks lies in capturing multiple modes, and unimodal approximations may limit their expressiveness. While our method is tractable for the chosen approximation, we acknowledge that extending it to more complex multi-modal approximations like MCMC could be challenging but not intractable (e.g. the Wasserstein geodesics can be approximated with the Sinkhorn algorithm [17 in paper]).
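For reference, a bare-bones version of the Sinkhorn iteration mentioned here (our sketch, not code from the paper) alternately rescales a Gibbs kernel so that the resulting plan matches the two marginals:

```python
import numpy as np

def sinkhorn(cost, eps=0.1, iters=500):
    """Entropy-regularized OT plan between two uniform discrete
    measures, via alternating Sinkhorn scaling of a Gibbs kernel."""
    n, m = cost.shape
    K = np.exp(-cost / eps)
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):
        u = (1.0 / n) / (K @ v)    # match row marginals
        v = (1.0 / m) / (K.T @ u)  # match column marginals
    return u[:, None] * K * v[None, :]

# Transport between two copies of a point cloud on the line:
# the plan concentrates near the diagonal (identity coupling).
x = np.linspace(0.0, 1.0, 5)
plan = sinkhorn((x[:, None] - x[None, :]) ** 2)
print(np.round(plan.sum(0), 3))  # each column sums to 1/5 = 0.2
```

Applied to samples from two posteriors, such an entropic plan would give an approximate coupling even when the geodesic has no closed form, at the cost of the regularization bias controlled by `eps`.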
* **Precision of Approximations and Interpretability**: The reviewer correctly notes that as the approximation to the Bayes posterior becomes more precise, the problem of connectivity becomes less interesting, as the Bayes posterior would naturally incorporate all possible modes. While we agree with the reviewer, we also need to acknowledge that despite the best efforts of the community, the BNN posterior is still elusive: even carefully tuned MCMC methods cannot fully characterize its nature, and they still need orders of magnitude more gradient evaluations. We believe that this characterization of permutation symmetries will have value for SG-MCMC methods as well.
* **Comparison with SGD**: Regarding the comparison with SGD, it is still up for debate whether SGD solutions are actual global minimizers (Ainsworth et al.'s paper itself shows that we can find better solutions than those found by pure SGD, supporting the hypothesis that SGD solutions are not global). In this regard, we can view SGD as an approximation method for the exact minimization problem $\min_\theta\ell(\theta)$, for some loss function $\ell(\cdot)$, in the same way that we can view approximate Bayesian inference (including sampling) as an optimization problem over the space of probability measures w.r.t. the true posterior (although it is clear that the nature of the symmetries is not equivalent in the two settings).
* **Evaluation Metric**: We appreciate the reviewer's suggestion regarding using test accuracy as an additional metric for evaluating mode connectivity. Test accuracy is indeed more interpretable in practice, and we agree that including it in the evaluation would provide valuable insights. In response to this suggestion, we will incorporate plots of test accuracy alongside log-likelihood in the revised manuscript, offering a more comprehensive assessment of the model's performance and connectivity. In the meantime, please see in the rebuttal PDF the replication of Figure 3 with accuracy, alongside the requested zoom.
* **Data Augmentation and Cold Posterior Effect**: We appreciate the reviewer pointing out the potential impact of data augmentation on the cold posterior effect (CPE). No, data augmentation is never employed in our experiments (see general comments). Having said that, during this rebuttal we were able to run a comparison on the effect of data augmentation alone (without tempered posteriors). In Figure 8 of the rebuttal PDF, we can see that the behavior both with and without DA is very similar. We will conduct additional experiments with data augmentation and temperature scaling to provide a more comprehensive analysis of its influence on our results. Due to the limited time, we were unable to run the DA+temperature comparison, but it will be added in the next version.
* **Test Accuracy Gains**: While analyzing the effect of tempered posterior is not our primary scope in our work, we report the results in the table below for ResNet20.
| Method | Accuracy |
| ----------- | -------- |
| Tempered VI | 0.8281 |
| VI | 0.8580 |
| SGD | 0.8546 |
Because we are using a clean likelihood (no batch statistics, and no DA) the results with VI are slightly worse but in line with the literature (see Appendix K4 in [92 from paper])
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I thank the authors for running additional experiments in such a short amount of time!
**Tractability:** I agree that the Sinkhorn algorithm could be used to approximate the Wasserstein distance, but it would be tricky to reduce the permutation problem to anything tractable in this case, right? I might be missing something.
**Precision of Approximation and Interpretability:** I agree that the exact posterior remains elusive and understanding approximate methods is thus very valuable. I simply remain unsure whether permutations really say anything fundamental about approximate posteriors. While of course SGD is also not an exact minimizer of its objective, even if it were, the permutation problem still remains! Moreover, here the permutation symmetry really reveals something fundamental about the problem.
**Evaluation Metric:** Thank you for adding this! The precise definition of linear connectivity is always an issue in this line of work. What do the authors consider as connected here? Strictly speaking, none of the results are linearly-connected as the values do worsen, as evident now in the zoomed plot. I'm aware that similar results might also have passed as being "connected" in the literature, so I don't want to impose a new standard here, but it would be great if the authors could at least compare their connectivity values (for accuracy) with [1]. Especially the ResNet seems to worsen by almost 10% on the training data (removing the baseline here too would be helpful), which seems somewhat drastic. I hope the authors can clarify this.
**Data augmentation:** Thank you for clarifying this! It's nice that data augmentation does not affect results too drastically.
[1] Git Re-Basin: Merging Models modulo Permutation Symmetries, Ainsworth et al.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for these additional comments. Here's our reply:
* **Tractability**: We agree with the reviewer on this point. Extending this to sample-based inference will be neither trivial nor easy, hence our comment about leaving it as future work.
* **Precision of Approximation and Interpretability**: We can give the reviewer one possible application where such permutation analysis can benefit approximate inference: convergence analysis of SG-MCMC. We know from experience that classic convergence statistics (like R-hat) are not robust for assessing the convergence behavior of MCMC chains in large models. In Figure 2 of [1] we see that, despite unrealistic compute availability, the R-hat statistic severely underestimates the convergence of the chains, due to HMC exploring permutation-equivalent modes. While it is possible to compute such a statistic in function space, this is not standard practice, since R-hat then also becomes a function of the inputs, rather than just a property of the chains/samples. On the other hand, by accounting for permutation symmetries we could derive convergence statistics more appropriate for MCMC methods in this context.
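The failure mode described here can be reproduced with a toy computation: two chains that each sit in a permutation-equivalent mode of the same function look unconverged under a per-parameter R-hat. A minimal sketch of the classic (non-split) Gelman-Rubin statistic (our illustration, not code from [1]):

```python
import numpy as np

def r_hat(chains):
    """Gelman-Rubin R-hat for `chains` of shape (m, n): m chains, n draws each."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
# Two chains sampling permutation-equivalent modes of the same function:
# chain 1 sits in the mode at +1, chain 2 in the symmetric mode at -1.
c1 = rng.normal(+1.0, 0.1, size=1000)
c2 = rng.normal(-1.0, 0.1, size=1000)
print(r_hat(np.stack([c1, c2])))  # >> 1: looks "unconverged" in weight space
```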
* **Evaluation Metric**: If we look at Figure 9 in [2], we see that for the MLP the results are actually very similar to ours (if not marginally worse, see CIFAR10/MLP). The biggest difference is indeed with ResNet20, where the only setup difference is the normalization layer (LayerNorm for [2] and FRN for us); we speculate that this is the cause. Nonetheless, we want to highlight that our alignment method decreases the (accuracy) barrier by 85.89% on the train set and by 87.55% on the test set.
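For reference, the barrier figures quoted in this thread are usually computed as the worst-case gap between the metric along the interpolation path and the linear chord between the endpoint values; a generic sketch under that definition (an assumption about the exact convention used):

```python
import numpy as np

def barrier(loss_fn, theta0, theta1, num=21):
    """Worst-case gap between the loss along the linear path and the
    linear interpolation (chord) of the endpoint losses."""
    taus = np.linspace(0.0, 1.0, num)
    path = [loss_fn((1 - t) * theta0 + t * theta1) for t in taus]
    chord = [(1 - t) * path[0] + t * path[-1] for t in taus]
    return max(p - c for p, c in zip(path, chord))

# Toy loss with two symmetric (permuted) minima at (+1, -1) and (-1, +1):
loss = lambda th: float(np.sum((np.abs(th) - 1.0) ** 2))
print(barrier(loss, np.array([1.0, -1.0]), np.array([-1.0, 1.0])))  # 2.0, at tau = 0.5
```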
[1] Izmailov et al. What Are Bayesian Neural Network Posteriors Really Like?
[2] Ainsworth et al. Git Re-Basin: Merging Models modulo Permutation Symmetries. | Summary: This work permutes together the distributions of Bayesian neural network (BNN) parameters in the context of variational inference (VI) so that they are linearly connected. This is done by adapting recent work on permuting SGD solutions to be linearly connected: the permutation assignment objective optimizes for similarity of the means and variances from VI instead of similarity of the parameters (as in Ainsworth et al. 2022). The problem setup and permutation objective are described and derived in detail. Experiments show that this method is comparable in loss barrier to directly permuted SGD solutions.
Strengths: Bringing permutation alignment into a probabilistic setting is novel, and the formalization of the methodology is satisfying. The paper is well-organized, and a number of variations (variance of prior, temperature) are considered in the experiments. Despite its focus on BNNs, this work also has relevant implications for the SGD solution setting, since it would be nice to be able to extend linear mode connectivity from individual solutions to families of SGD solutions whose parameters are stochastic (e.g. due to randomness in initialization/training).
Weaknesses: In both the temperature and prior variance experiments, a comparison of barriers at the limit of 0 variance/temperature (corresponding to direct alignment of the MAP solution) for CIFAR-10 would be interesting, as it would put the observed difference between VI alignment and SGD alignment in figure 8 into context. Specifically, it would be nice to see figures 6 and 7 replicated on CIFAR-scale networks, and a reference line included in each figure to indicate the barrier achieved by SGD alignment.
From a presentation standpoint, figures 4-8 are arranged strangely (5 is out of order) and too small to read clearly. In particular, the text and lines should be larger, some of the margins smaller, and more distinct colors used. Figures 1-3 are less important from a readability standpoint, but may also benefit from the same adjustments.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Given that barrier is higher when aligning VI distributions versus aligning the MAP parameters (akin to directly aligning SGD solutions), what are the potential benefits of optimizing alignment from a distributional standpoint? If the goal is to minimize barrier, why not simply align the MAP? Some motivation would be helpful here.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The main limitation (which is briefly discussed in section 7, but not strongly emphasized throughout) is that the assumptions on the posterior distribution are too rigid: namely that parameters are independent Gaussians. The authors conjecture (line 239) that this may be the cause of reduced performance relative to Ainsworth et al. (2023). Given the derivation in (16) already admits arbitrary covariances, it seems very interesting to consider the case where the covariance of the posterior is non-diagonal (lines 275-276). This could also lead to significant innovations over the existing alignment algorithm of Ainsworth et al. (2023), which is largely unchanged in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for insightful comments and discussion points.
* **Temperature and prior for ResNet20/CIFAR**: Thanks for the suggestion, we will take this into consideration. During the limited time span of this rebuttal, we were able to run the ResNet comparison with different prior variances and different temperatures (see Figure 7 in the rebuttal PDF). Modulo the fact that finding zero barriers is more difficult on ResNets (vs. MLPs), the comments on prior variances in the paper generally apply to this experiment as well: "without alignment we see that naive VI solutions are easier to interpolate with lower barrier when the prior is more diffused. At the same time, we see that higher variances produce bigger gaps between train barriers and test barriers. We speculate that this is due to overfitting happening with more relaxed priors, which makes low-barrier (but low-likelihood) solutions easier to find". The same holds for the temperature comparison, for which "we see that barriers for cold posteriors with alignment are marginally closer to zero than for warm posteriors". To conclude, in both cases the results are consistent with the comments in the paper, and we will add these additional experiments to strengthen the discussion. Thanks for the suggestion!
* **Presentation**: Thanks, this is already taken into consideration for the next version of the manuscript (see comment to Reviewer 2rWU). We will make sure to improve the readability of the plots.
* **Aligning using MAP**: Thanks for the question. Indeed, we could align the MAP solutions and reuse the permutation found there. Still, we don't think this would work well in practice for a few reasons: (i) independent models do not share the same permutation matrices; (ii) if we look only at the means, the new results requested by Reviewer Kqgj show that this is indeed not the best objective for alignment.
* **Limitation on the posterior**: Yes, indeed extensions to non-diagonal covariances are possible. We haven't investigated this possibility yet for a couple of reasons: (i) full covariances with variational inference are intractable for these large models, requiring various approximations (low-rank, structured covariance, etc.) to scale computationally to these networks; (ii) with non-diagonal covariances, the steps to go from Eq. (16) to the formulation in Eq. (19) would not be possible. A way to approach this could be to learn the permutation $P$ using a new ELBO together with a straight-through estimator:
$$
\max_{\widetilde{q_1}} \mathcal L_{elbo}\left(\frac 1 2 \left(Id+T_{q_0}^{\widetilde{P}_\#\widetilde q_1}\right)_\# q_0 \right) \quad \text{where} \quad \widetilde{P} = \arg\min_P \mathcal W(P_\# q_1, q_0)
$$
To make it tractable, the r.h.s. needs to approximate the full covariance with its diagonal components, while the l.h.s. should be computed with its full covariance. We believe that such discussion requires some careful analysis, which was beyond the scope of this work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional results and the clarification regarding tractable posteriors. Given the new results show the advantage of including covariance in the alignment objective, I will raise my score. | Rebuttal 1:
Rebuttal: # General comments
First, we would like to thank the Reviewers for their comments and helpful feedback.
With this paper we analyze the geometry of the Bayesian posterior in deep neural networks by considering the permutation symmetries arising from the neural network parameterization. We do this by extending previous analyses of loss-optimized models to the Bayesian setting, which requires, among other things, extending the concepts of barrier and solution interpolation.
In the variational inference setting, we then propose a methodology to align distributions by framing the problem as a combinatorial optimization. With this setup, we can find distributions that, once interpolated, exhibit low/zero barriers even for ResNet-scale models.
We are delighted to see that Reviewers agree on the novelty and quality of presentation (bvHV, Kavz, Kqgj, T6Qt, 2rWU), that our proposal is principled (2rWU) with rigorous experiments (bvHV, 2rWU) and interesting results (2rWU, Kqgj).
We use this space to address some common points:
* **Figures presentation**: Several Reviewers (Kavz, 2rWU, hffp) commented on the readability of some of the plots in the submission. Indeed, we had to squeeze the figures more than we would have liked so as to remain within the 9 pages while presenting all the important results. We plan to remake the figures to address this point, possibly by converting the plots to LaTeX using tikzplotlib. Additionally, if allowed one additional page for the camera-ready, we will be less constrained in the positioning of the figures, which should help.
* **Scope of the work and extensions beyond variational inference**: Reviewers 2rWU and T6Qt highlight that our initial conjecture is too broad, as it can cover all possible approximations (VI, Laplace, MCMC) while our work primarily focuses on VI. The idea behind such a broad conjecture was to encourage further investigations with other setups and approximations. Nonetheless, we generally agree with this opinion and we will re-frame the Conjecture to better align with our context.
* **Data augmentation (DA)**: Reviewers bvHV and T6Qt considered our choice of not using data augmentation in our experimental setup a limitation. This choice was merely a consequence of the difficult interpretation of DA in the Bayesian context (as many previous works in the literature have shown [e.g. 92, 68, 40 from the paper]). Indeed, without DA (and without BatchNorm for normalization layers), we are provably targeting a correct Bayesian posterior. Having said that, as a sanity check we have run a ResNet20 on CIFAR10 with and without data augmentation, showing similar behaviour in both cases (Figure 8 in the rebuttal PDF).
* **Mixtures vs OT interpolation**: Reviewers 2rWU and T6Qt requested additional clarification on the choice of using optimal transport for the interpolation. While we address this point individually for both Reviewers, we want to give a general intuition for this choice. We are interested in studying the properties of the approximate Bayesian posterior while we align the solutions w.r.t. permutation symmetries. With mixtures, we are essentially re-weighting the two solutions, without a continuous "transport" of distribution mass between the two extremes. This means that if we look at the barrier, we don't see any barriers, not because they don't exist, but simply because we are interpolating in a way that prevents us from exploring the geometry of the posterior. Said differently, mixtures do not lead us to the construction of low-barrier solutions from permutation symmetries.
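The contrast between the two interpolations is easy to write down for 1-D (diagonal) Gaussians, where the W2 geodesic is itself a Gaussian with linearly interpolated mean and standard deviation, while the mixture only re-weights the two fixed densities. A simplified sketch of this (our illustration, not the paper's code):

```python
import numpy as np

def ot_interp(mu0, sig0, mu1, sig1, tau):
    """W2 geodesic between diagonal Gaussians: a single Gaussian whose mean
    and per-coordinate std are interpolated linearly (mass is transported)."""
    return (1 - tau) * mu0 + tau * mu1, (1 - tau) * sig0 + tau * sig1

def mixture_pdf(x, mu0, sig0, mu1, sig1, tau):
    """Arithmetic mixture: only re-weights the two fixed densities."""
    g = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return (1 - tau) * g(mu0, sig0) + tau * g(mu1, sig1)

mu, sig = ot_interp(-2.0, 0.5, 2.0, 0.5, 0.5)
# OT midpoint: one Gaussian centered at 0 (it crosses the region between the
# modes). The mixture midpoint stays bimodal, with half its mass still at each
# endpoint, so no barrier can appear along the path.
```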
Below, a summary for the additional experiments run during the rebuttal and plots that you will find in the rebuttal PDF:
* **Figure 1**: Visualization of mixtures vs OT interpolation
* **Figures 2/3**: Two new visualizations of the posterior for aligned and not aligned solutions
* **Figure 4**: Analysis of features variance collapse in interpolated models
* **Figure 5**: Replication of Figure 3 from the paper, but using accuracy rather than likelihood
* **Figure 6**: Zoom of Figure 3 from the paper, with focus on the aligned models only
* **Figure 7**: Analysis of prior variance and posterior temperature on the solution connectivity for ResNet20
* **Figure 8**: Analysis of the effect of data augmentation on ResNet20
Here are the changes planned for the next version/camera-ready of the paper:
1. Change the conjecture in Sec. 1 to contextualize better our contributions
2. Include arguments for using optimal transport to interpolate solutions in Sec. 3
3. Fix the readability of some plots
4. Fix various typos and add a couple of missing references pointed out by the Reviewers
5. Add and comment the new experiments presented in this rebuttal on prior and temperature using ResNet20 models in Sec. 5
6. Add and comment the new experiments presented in this rebuttal on data augmentation in Sec. 5
Furthermore, the remaining discussion point of this rebuttal will be added to the Appendix.
Finally, in the threads below we will address the Reviewers' comments and questions in detail.
Pdf: /pdf/d195b766c369d572f6ca54e3fad13f7e3c653dbd.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors study the geometry of SGD-trained Gaussian mean-field variational approximations to the posteriors of Bayesian neural networks (BNN). In large part, the authors propose extensions of the method and analysis of Ainsworth et al. [1] from MAP-estimated neural networks to BNNs. Notably, the authors informally conjecture that a permutation symmetry exists in the solutions that approximate Bayesian methods can find, similar to the permutation symmetry of MAP solutions SGD finds. To test this hypothesis, the authors first use optimal transport theory to extend the linear interpolation and the loss barrier framework of [1] to the variational posterior approximation setting. Then, they show that they can align individual mean-field Gaussian weight posteriors by approximately solving a bilinear assignment problem using an algorithm analogous to the one proposed by [1] for MAP solutions. The authors demonstrate empirically that zero-barrier interpolations exist for permutation-aligned variational posteriors of non-trivial architectures (ResNets) trained on non-trivial datasets (MNIST, FashionMNIST and CIFAR-10).
## References
[1] Ainsworth, S. K., Hayase, J., & Srinivasa, S. (2022). Git re-basin: Merging models modulo permutation symmetries. arXiv preprint arXiv:2209.04836.
Strengths: Understanding the properties of approximate solutions to the posteriors of BNNs is one of the central challenges of Bayesian deep learning; hence extending the method and analysis of Ainsworth et al. to the variational BNN setting is an important step towards this goal. The proposed extension of the interpolation framework and the alignment procedure using optimal transport theory seems natural, and the experimental methodology is sound and reasonably thorough. Besides the experiments analogous to the ones of Ainsworth et al., the authors also investigate the effect of prior variance and the cold posterior effect, the limit of which is the MAP solution. The paper is mostly well-written and easy to follow.
Weaknesses: While I did not find any significant weaknesses in the work, there are a couple of points that, if improved/clarified, could significantly strengthen the paper.
### Conjecture 1
First, I found Conjecture 1 too broad and imprecise, even compared to the conjecture given by Entezari et al. [1], mainly for two reasons. The conjecture states that:
"Solutions of approximate Bayesian inference for neural networks are linearly connected after accounting for functionally transparent permutations."
First, I think including the class of approximate Bayesian methods is too broad because it covers variational inference (VI), Markov chain Monte Carlo (MCMC), and the Laplace approximation, which all yield significantly different solutions. Hence, since the authors only study mean-field VI in the paper, I think it would be better to restrict the conjecture to this case only and potentially extend the conjecture to the other methods if future work supports it with some empirical evidence.
Related to the first point, it is thus unclear if there is a universal notion of "linear connectedness" for the approximate solutions I mention above. Furthermore, as both Entezari et al. [1] and Ainsworth et al. [2] pointed out, the modes' linear connectedness appears to be a particular feature of SGD. Hence, I would suggest that the authors rephrase their conjecture to something like
"SGD-based solutions of mean-field variational Bayesian inference for neural networks are linearly connected after accounting for functionally transparent permutations."
Thus, this is also a more direct generalization of the conjecture given in [1]. What do the authors think?
### Connection between the interpolation and the alignment process
The authors define the interpolation of two variational posteriors as well as the alignment procedure using optimal transport (OT) theory. However, as far as I can tell, these two things are not obviously connected. While I am not claiming that the authors' choice to use OT for these definitions is unnatural, it currently seems more of a choice of convenience than one dictated by theory. In particular, it seems that the authors chose this precisely because the alignment procedure reduces to a bilinear assignment problem, and the interpolating density is also Gaussian. Could the authors clarify whether there is some deeper theory that would justify the authors' choices? Does the alignment objective in eq (20) follow somehow from the interpolation method in eq (6)?
Relatedly, the authors state the following on L116:
"While we could interpolate using a mixture of the two solutions, we argue that this choice is trivial and does not fully give us a picture of the underlying loss landscape."
What the authors mean by the word "trivial" here is unclear. They appear to mean that interpolating with arithmetic mixtures is an "obvious" choice, not "trivial". The properties of the mixture choice are not at all obvious to me, and it is not clear why it doesn't provide us with information about the loss landscape. Again, I am not saying that the authors' choice of using OT is wrong and the arithmetic mixtures are better or equally useful; but they need to give arguments (either theoretical or empirical) why they think it is uninteresting.
In a similar vein, I wonder if using geometric mixtures, i.e. using $q_\tau \propto q_0^{1 - \beta} \cdot q_1^{\beta}$ for $\beta \in [0, 1]$, would have similar behaviour, or if it would lead to a different alignment procedure.
### Figure 5
Given the interpolation and alignment process, is the hyperplane in Figure 5 meaningful? The hyperplane is defined using a simple linear interpolation of the weight posteriors' mean parameters, which seems to go against all the previous machinery the authors argued for earlier in the paper. If the authors want to include such a hyperplane, perhaps there is a definition that can be made using OT, or they could draw samples $\theta_a \sim q_0, \theta_b \sim q_1, \theta_c \sim P_\sharp q_1$ and linearly interpolate those?
### Miscellaneous
Eq (21) has a small mistake: the argmax ranges over $i \in [0:L]$, but the objective only involves permutations with indices up to $L - 1$. Furthermore, Eq (21) could be compactified by defining $P_0 = P_L = I$ and writing the sum using $\sum$ notation.
Line 4 in Algorithm 1 should be broken up into two or three lines, and the font size should be increased.
The font size in Figures 3-8 is too small and should be increased to match at least the font size of the captions.
The training procedure needs clarification. Did the authors use Bayes by Backprop [3] or the local reparameterization trick [4]?
## References
[1] Entezari, R., Sedghi, H., Saukh, O., & Neyshabur, B. (2021). The role of permutation invariance in linear mode connectivity of neural networks. arXiv preprint arXiv:2110.06296.
[2] Ainsworth, S. K., Hayase, J., & Srinivasa, S. (2022). Git re-basin: Merging models modulo permutation symmetries. arXiv preprint arXiv:2209.04836.
[3] Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015, June). Weight uncertainty in neural network. In International conference on machine learning (pp. 1613-1622). PMLR.
[4] Kingma, D. P., Salimans, T., & Welling, M. (2015). Variational dropout and the local reparameterization trick. Advances in neural information processing systems, 28.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her interesting comments. Below we reply inline to the reviewer's questions:
* **Strength of conjecture 1**: We agree with you on this. We decided to state a broad conjecture to leave room for possible extensions to the Laplace approximation (easier) and SG-MCMC methods (definitely more challenging). Since this point was raised by Reviewer T6Qt as well, we think it's appropriate to rephrase it to limit the context under analysis. Thanks for the suggestion.
* **Connection between interpolation and alignment**: That is correct: technically Eq (20) and Eq (6) are independent, and indeed other choices can be made (e.g., interpolation using mixtures and alignment using the KL divergence). For the interpolation, the choice of the Wasserstein geodesic was dictated by the geometry of the interpolation paths between the two distributions (akin to linear interpolation for the Euclidean metric; more on this in the next point). Additionally, the use of the Wasserstein distance for the alignment conveniently allows us to re-interpret the problem as an assignment problem, for which efficient routines are available. In the appendix, we briefly show why we think the KL divergence is not the best objective to align distributions. Indeed, other choices can be made (see the new experiment run for Reviewer Kqgj).
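For intuition on how minimizing the Wasserstein distance over permutations becomes an assignment problem, here is a simplified single-layer sketch with factorized 1-D Gaussians, where $W_2^2$ between two units has the closed form $(\mu_i-\mu_j)^2 + (\sigma_i-\sigma_j)^2$ (an illustration only; the paper's actual objective couples consecutive layers, and efficient solvers such as the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`, replace the brute force used here):

```python
import itertools
import numpy as np

def align_units(mu0, sig0, mu1, sig1):
    """Brute-force the permutation of model-1 units minimizing the summed
    squared W2 distance to model-0 units (closed form for 1-D Gaussians)."""
    n = len(mu0)
    cost = (mu0[:, None] - mu1[None, :]) ** 2 + (sig0[:, None] - sig1[None, :]) ** 2
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return list(best), sum(cost[i, best[i]] for i in range(n))

mu0 = np.array([0.0, 1.0, 2.0]); sig0 = np.array([0.1, 0.2, 0.3])
# Model 1 is model 0 with its units cyclically shifted:
perm, total = align_units(mu0, sig0, mu0[[1, 2, 0]], sig0[[1, 2, 0]])
print(perm, total)  # [2, 0, 1] 0.0 -- the shift is recovered at zero cost
```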
* **"Mixtures are trivial"**: We realized that "trivial" was a poor choice of word in this context. What we meant is that a mixture of the distributions is not sufficient to capture the underlying complex geometry of the posterior. To aid this conversation, please have a look at Figure 1 in the PDF. Here, we plot the test likelihood for two interpolation strategies: OT vs. mixture (both without any alignment). With mixtures, we see that the likelihood is pretty much constant during the interpolation, but this is very misleading: we don't see barriers not because they don't exist, but because the mixture simply re-weights the distributions, without continuously transporting mass in the parameter space.
We will make sure to reword this passage more appropriately.
* **Figure 5**: Thanks for the suggestion. Indeed, this might not be the best way to visualize the underlying structure. Nonetheless, we believe the visualization still has value, as it shows the different connectivity properties of the aligned and non-aligned solutions. Having said that, we took the Reviewer's opinion into account and prepared another visualization (see Figures 2 and 3 in the rebuttal PDF). In this case, we sampled from the two solutions, as well as from the interpolated one (for $\tau=0.5$), with and without alignment. We then use these three samples as the support to build the projecting hyperplane. For the sake of visualization, these three samples are not on a line once projected, but in the original space they are. With alignment, we see that the three samples all lie in a relatively flat region of the posterior, in contrast to the case without alignment, where we once more see a barrier rising when connecting the three samples.
* **Miscellaneous**: Thanks for spotting the mistake in Eq (21), you are right. Font size and figure placement were known issues; unfortunately, we didn't find other arrangements that would fit in 9 pages. Eventually, if allowed an additional page for the camera-ready, we will make sure to fix this. For the training procedure, we refer back to the details in the Appendix (we are indeed using the reparameterization trick, not the local one).
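As a side note on the training detail above, the difference between the two tricks can be sketched in a few lines of numpy (an illustration, not the paper's code): the global reparameterization trick samples one weight matrix shared across the batch, while the local trick samples the Gaussian pre-activations with independent noise per example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))      # batch of 8 inputs
mu = rng.normal(size=(4, 3))     # variational means of the weights
sig = 0.1 * np.ones((4, 3))      # variational std devs of the weights

# Global reparameterization: sample the weights once, shared across the batch.
W = mu + sig * rng.normal(size=mu.shape)
act_global = X @ W

# Local reparameterization: sample the (Gaussian) pre-activations directly,
# with an independent noise draw per example -- lower-variance gradients.
act_mean = X @ mu
act_std = np.sqrt((X ** 2) @ (sig ** 2))
act_local = act_mean + act_std * rng.normal(size=act_mean.shape)
```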
---
Rebuttal Comment 1.1:
Title: Response to the Authors
Comment: I thank the authors for their rebuttal. After reading the other reviews and the authors' rebuttals, I maintain that the paper should be accepted and I raise my confidence score to reflect this.
I thank the authors for providing a very nice explanation of why the mixture interpolation is unhelpful. I think they should also include Figure 1 in the appendix of the camera-ready version of the paper, as it would help readers who do not have a strong background in OT (like myself) to understand the authors' choices much easier. Though they did not address my point regarding geometric mixtures, I am now much more convinced that the OT formulation is more appropriate, as it captures the intuition of linear interpolation analogous to the point-estimate setting.
*Regarding the disconnect between the interpolation and alignment procedure:* I think, in a certain way, the missing puzzle piece for me regarding the OT-based formulation is how SGD is connected to it. In particular, I wonder if using SGD to train the VI solution could be construed as some sort of OT procedure to transport the initial posterior guess to some optimal one. For example, is it connected to the Wasserstein gradient flow (Salim et al., 2020)? I think some form of a positive answer would conclusively put my worries to rest since this would shed light on how the VI setting generalizes the point-estimate setting. If SGD on VI was performing OT, it would be much more intuitively obvious why we should expect the OT interpolation and alignment procedures to be the right things to look at. Perhaps this could also help establish a connection between the two, though this is more of a shot in the dark on my part.
## References
Salim, A., Korba, A., & Luise, G. (2020). The Wasserstein proximal gradient algorithm. Advances in Neural Information Processing Systems, 33, 12356-12366.
---
Reply to Comment 1.1.1:
Comment: Thanks for the reply.
Regarding the geometric mixture, we apologize for missing this point earlier. We are not sure how the interpolation paths would look with a geometric mixture, especially at the extremes ($\tau=\{0,1\}$). Indeed, if $q$ is Gaussian, we have
$$
q^\tau \propto \exp\left(-\frac \tau 2 \left(\frac{x-\mu}{\sigma}\right)^2 \right) = \exp\left(-\frac 1 2 \left(\frac{x-\mu}{\sigma/\sqrt{\tau}}\right)^2 \right)
$$
which is the unnormalized PDF of a Gaussian RV with variance $\sigma^2/\tau$ (as $\tau \rightarrow 0$ the variance diverges and the density flattens to an improper distribution).
Regarding the connection with the Wasserstein gradient flow, an interesting reference connecting VI and OT could be [1]. In this paper, the authors show how the dynamics of the mean and (co)variance of Gaussian VI follow the gradient flow of the Kullback–Leibler (KL) divergence $KL(\cdot \| \pi)$ on the submanifold of Gaussian distributions in the Wasserstein space $\mathcal P_2(\mathbb R^d)$ (known as the Bures–Wasserstein manifold), which is equipped with the 2-Wasserstein distance.
Additionally, the authors also show an SGD version of the discretization of the Bures–Wasserstein gradient flow, which might be an interesting result for further characterizing the properties of the approximate neural network posterior.
Thanks again for this discussion.
[1] Marc Lambert, Sinho Chewi, Francis Bach, Silvère Bonnabel, Philippe Rigollet. Variational inference via Wasserstein gradient flows. NeurIPS 2022 | Summary: This paper paper extends linear mode connectivity modulo permutation to Bayesian neural networks posteriors. Authors do this by imposing a Wasserstein metric on the space of distributions and look at how log likelihood changes along the geodesic between two distributions obtained via approximate Bayesian inference. For experiments, authors study BNNs parametrized by multivariate Gaussians with diagonal covariance. Analogous to weight matching algorithm proposed in [1], authors propose to find permutation that minimizes Wasserstein distance between the two distributions. For Gaussian distribution with diagonal covariance, authors propose an algorithm similar to weight matching that also takes into account the variances of the Gaussian. In experiments authors show that this heuristic for finding permutation that leads to small change in log likelihood along the geodesic.
*[1] Git Re-Basin: Merging Models modulo Permutation Symmetries. Ainsworth et al.*
Strengths: - Overall a well-written paper that extends previously studied linear mode connectivity modulo permutation of neural networks to Bayesian neural networks.
- Experiments demonstrate that the heuristic proposed to compute permutations works for network architectures and datasets studied in the paper.
Weaknesses: - Limitations of the proposed heuristic are not extensively discussed in the paper. For instance, model architectures with batch norm typically fail without additional fixes [2].
- It’s not clear if the proposed algorithm outperforms the standard weight-matching / activation-matching / STE-estimator-based approaches to finding permutations discussed in [1].
*[2] REPAIR: REnormalizing Permuted Activations for Interpolation Repair. Jordan et al.*
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Empirically, how important is the role of adding the covariance matrix to the objective when computing the permutations? For instance, using standard weight matching in this setting just assumes that the covariance matrix is the identity. It would be useful to see a table comparing the different approaches.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper limits itself on studying connectivity modulo permutation for approximate Bayesian NNs posteriors found via gradient based algorithms. It’s not clear if this phenomenon holds for other approaches which should be interesting future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their interesting comments:
* **Model architecture**: we agree that the choice of model architecture can impact the ability to find good solutions with a low/zero barrier. One such choice is the width of the neural network: the wider the model, the lower the barrier. We are aware that the choice of normalization strategy (especially batch-dependent ones) can affect the overall geometry of the problem (exactly as demonstrated in Fig. 3 of your reference). Additionally, batch-dependent normalization layers don't have a clear Bayesian interpretation (technically, the likelihood in line 84 cannot factorize). That's why we chose to use the Filter-Response-Normalization layer, as done in previous work [e.g. 40 in paper]. Having said that, we think it would be possible to apply the REPAIR method to our case as well, though we would expect FRN to behave similarly to LN. As a quick sanity check, we analyze the variance of activations following the instructions in [2], with the sole difference that the activations are marginalized w.r.t. samples from the posterior; for both the naïve interpolation and the aligned interpolation we then compute the variance ratio as discussed in Section 3.1 of [2] (we take $\tau=0.5$ as the middle point between the two “endpoints”). Results for a couple of models are available in the rebuttal PDF. We can see that there is some level of collapse across the various configurations, which, despite being less pathological than the one shown in [2, Figure 2] and improving with the distribution-aligned method, appears to be present in the Bayesian setting as well. Finally, we realized that REPAIR is not cited in the current version of the paper; we will make sure to add it in the next version. Thanks for the interesting point!
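As an illustration of the diagnostic discussed above, here is a minimal sketch of the variance-ratio computation on synthetic per-neuron activations (the function name and the toy data are ours, not the paper's code; the paper additionally marginalizes activations over posterior samples):

```python
import random
import statistics

def variance_ratio(act0, act1, act_mid):
    # REPAIR-style diagnostic ([2], Sec. 3.1): ratio of the midpoint
    # network's activation std to the average of the endpoint stds.
    # Values well below 1 indicate variance collapse at tau = 0.5.
    s0 = statistics.pstdev(act0)
    s1 = statistics.pstdev(act1)
    sm = statistics.pstdev(act_mid)
    return sm / (0.5 * (s0 + s1))

random.seed(1)
# Toy activations for one neuron in the two endpoint networks
act0 = [random.gauss(0.0, 1.0) for _ in range(10_000)]
act1 = [random.gauss(0.0, 1.0) for _ in range(10_000)]
# Naive interpolation of two independent unit-variance signals shrinks
# the std towards 1/sqrt(2), illustrating the collapse effect
act_mid = [0.5 * (a + b) for a, b in zip(act0, act1)]
ratio = variance_ratio(act0, act1, act_mid)
```

A well-aligned interpolation would yield a ratio closer to 1, which is what the distribution-aligned method improves.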
* **Performance w.r.t. [1]**: Figure 8 suggests that weight matching (which was overall the best-performing of the three approaches in [1]) can achieve slightly better performance than distribution alignment, especially for wide networks (albeit with overall lower likelihood).
* **Effect of (co)variance**: please refer to the table below, where we report the barrier with different alignment objectives: naïve interpolation (without any alignment) and two types of alignment objective (full, which reflects the correct objective as derived in Eq. (21) from the Wasserstein distance, and mean-only, which disregards the variance information). For the moment, we report this for two ResNet architectures and one MLP, all trained on CIFAR10. As we can see, we consistently get better results when we use the proper full objective. For the next version of the manuscript, we will make sure to complete the table.
| Method | Model | Barrier |
| ------------------- |:---------- | ------------ |
| Aligned (full) | ResNet20x2 | **1.486** |
| Aligned (mean only) | ResNet20x2 | 1.492 |
| No alignment | ResNet20x2 | 1.911 |
| Aligned (full) | ResNet20 | **1.789** |
| Aligned (mean only) | ResNet20 | 1.993 |
| No alignment | ResNet20 | 2.067 |
| Aligned (full) | MLP | **0.028** |
| Aligned (mean only) | MLP | **0.028** |
| No alignment | MLP | 0.880 |
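To make the full vs. mean-only distinction concrete, here is a minimal, self-contained sketch on toy diagonal-Gaussian layers (the function names and brute-force search are illustrative, not the paper's implementation, which would use a linear assignment solver at scale):

```python
import itertools
import random

def w2_sq(mu1, s1, mu2, s2, perm, use_variance=True):
    # Squared 2-Wasserstein distance between two diagonal-Gaussian layers
    # after permuting the units of the second network with `perm`.
    # For diagonal Gaussians: W2^2 = ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2.
    cost = 0.0
    for i, j in enumerate(perm):
        cost += sum((a - b) ** 2 for a, b in zip(mu1[i], mu2[j]))
        if use_variance:  # the "full" objective; "mean only" drops this term
            cost += sum((a - b) ** 2 for a, b in zip(s1[i], s2[j]))
    return cost

def align(mu1, s1, mu2, s2, use_variance=True):
    # Brute-force search over unit permutations (fine for a toy layer)
    n = len(mu1)
    return min(itertools.permutations(range(n)),
               key=lambda p: w2_sq(mu1, s1, mu2, s2, p, use_variance))

random.seed(0)
n, d = 5, 3
mu1 = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
s1 = [[random.uniform(0.1, 1.0) for _ in range(d)] for _ in range(n)]
shuffle = [2, 0, 4, 1, 3]  # hidden-unit permutation of the "second" network
mu2 = [mu1[i] for i in shuffle]
s2 = [s1[i] for i in shuffle]
perm = align(mu1, s1, mu2, s2)  # recovers the inverse of `shuffle`
```

When the second network is an exact permuted copy, the recovered permutation drives the alignment cost to zero; the mean-only variant would also succeed here, and the two differ only when the variances carry extra matching information.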
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thank you for running additional experiments. It does appear that adding covariance to the objective indeed leads to a relatively smaller loss barrier. I will keep my original score. | null | null | null | null |
End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes | Accept (poster) | Summary: The authors propose the Neural Acquisition Process, a novel method for Bayesian optimization that meta-learns a function which jointly performs the surrogate and acquisition steps. The function, i.e. a transformer, works as a policy that predicts an action **a** given a state **s**, where the action is the hyperparameter to observe and the state comprises the history of observed samples. The transformer can also probabilistically predict the performance of a hyperparameter. The authors use reinforcement learning to train the policy and, additionally, use the negative log-likelihood of the predicted performance on meta-tasks as an auxiliary loss. They test the method on 4 different benchmarks and delineate its differences from previous work, demonstrating the relevance of their approach.
Strengths: * They present a novel and effective method for pre-training a transformer to perform Bayesian optimization. The method is well-founded and all the parts make sense from an engineering perspective. For instance, the use of auxiliary losses and reinforcement learning appears as a very important and interesting approach.
* They establish a difference from prior work in terms of some parameters such as history order invariance, F values, number of tokens, etc.
* The authors provide a strong experimental protocol, comparing to relevant baselines in a broad set of datasets and benchmarks, which makes the empirical results very strong.
Weaknesses: * The method might be overkill for small search spaces and inexpensive functions. However, many black-box functions nowadays, such as deep neural networks, are expensive to evaluate, thus this approach might be highly relevant.
* The authors do not present a comparison against time. How does the method perform in terms of time?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * Why is there no confidence bar for the MIP dataset?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No important limitation is detected.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and their remarks as well as underlining the soundness of our experimental protocol. We try to reply to their concerns below.
>The method might be overkill for small search spaces and nonexpensive functions. However, many black-box functions nowadays, such as deep neural networks, are expensive to evaluate, thus this approach might be highly relevant.
Indeed, for small search spaces and inexpensive black-box functions, our method might be overkill, and so would regular BO to a certain extent: if querying the objective is cheap, sample efficiency becomes less important than time efficiency. If the objective is very expensive to evaluate, then black-box solvers become useful. We keep our experiments limited to this case, in which sample efficiency matters most, as the main bottleneck in terms of cost and time is querying the objective, and this cost outweighs the cost of any model fitting or pretraining done offline or online. We conduct experiments in search spaces that can be considered small (e.g., in HPO-B) but also in larger ones, e.g. antibody design and EDA in combinatorial spaces, and MIP in a high-dimensional, mixed space (up to 135 dimensions).
>The authors do not present a comparison against time. How does the method perform in terms of time?
We have to distinguish between time during testing and time during pretraining. Methods that make use of GP surrogates are bound to be slower than NAP and NP-EI, which only use forward passes through the network to decide the next query point. At each BO step at test time, for example, FSBO [1] or MetaBO [2] have to fit a GP model. However, pretraining time is larger for NAP and MetaBO compared with NP-EI, as the latter doesn't use RL, which is computationally more costly during offline pretraining. They also take more time to pretrain than FSBO, which meta-learns a feature extractor for a GP (deep kernel). The GP-EI baseline has no pretraining as it is a classical BO method with no meta-learning. Finally, a more detailed comparison with Optformer-EI [3] in Appendix B.2 shows that NAP is faster to train and evaluate and uses far less memory.
It is natural that there is a tradeoff between pretraining cost and regret performance at test time, but as the most important metric in our setting is sample efficiency at test time, we did not show the results versus time.
During testing, we observed a rough 10x speedup compared to GP-based methods that require refitting. We will add those numbers to the appendix for reference.
>Question: Why is there no confidence bar for the MIP dataset?
The error bars are visible if you open the PDF file in Acrobat Reader. We also found that they are visible when opening it with Firefox. But indeed, we are not sure why they are sadly not visible when opening the PDF with Chrome. We are working on finding where this issue comes from.
----------
[1] Martin Wistuba and Josif Grabocka. Few-shot bayesian optimization with deep kernel surrogates. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021.
[2] Michael Volpp, Lukas P. Fröhlich, Kirsten Fischer, Andreas Doerr, Stefan Falkner, Frank Hutter, and Christian Daniel. Meta-learning acquisition functions for transfer learning in bayesian optimization. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
[3] Yutian Chen, Xingyou Song, Chansoo Lee, Zi Wang, Richard Zhang, David Dohan, Kazuya Kawakami, Greg Kochanski, Arnaud Doucet, Marc Aurelio Ranzato, Sagi Perel, and Nando de Freitas. Towards learning universal hyperparameter optimizers with transformers. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 32053–32068. Curran Associates, Inc., 2022.
---
Rebuttal Comment 1.1:
Title: Clarification
Comment: The authors may excuse my unclear comment with respect to time. What I was interested in was test time and how costly it is. For example, taking the compute overhead of your smart searcher into account, is a different method able to evaluate more configurations in the same time and therefore find a better solution? Effectively, Figure 2 with wall-clock time on the x-axis.
Edit: Sorry, not my review. I'm interested anyway :D
---
Reply to Comment 1.1.1:
Comment: As a follow-up to our answer to reviewer dCjX, as well as this official comment from reviewer 6XK8, we summarise some average test running time results in the hope of providing both with more details. These results will be collected in a table that we are happy to add to the appendix.
As mentioned before in our rebuttal of reviewer dCjX, we are in a BO setting where querying the black-box is considered to be the main bottleneck. For example, in the antibody experiment, this could be true both in terms of monetary and time costs as evaluating the objective could mean manufacturing the molecule and testing it in a wet-lab experiment. Furthermore, we do not have the actual times of the black-box evaluations for some experiments. For instance, the authors of the HPO-B dataset do not report those numbers. These are rather costly models to train and test and it would be prohibitively expensive to query the black-box ourselves. We use result files posted on the authors' repository which only contain black-box values for some baselines. Similarly, we do not have the real black-box evaluation time for the Antibody experiment as the data collection was done through a simulator.
What we do have, however, is the time to evaluate the black-box in the MIP and EDA experiments. By design of the experiment, evaluating one set of hyperparameters on the MIP experiment takes 2 hours. Compared to that, the time to train a GP model or doing a forward pass in NAP at test time is negligible. On EDA, the black-box time depends on the circuit so we approximate an average running time of 1 minute per circuit on open-source circuits, but this can take several hours on industrial circuits.
The tables below compare the average test time of 1 seed across various methods (from Figure 2 in the paper): first without the black-box time taken into account, then with the black-box time taken into account, and finally with both the black-box time and the pretraining time taken into account.
In the first column, we can see that methods which have to fit a GP during the BO loop (FSBO, MetaBO and GP-EI) are considerably slowed down compared to methods like NAP that only do forward passes through their network. This is because fitting the GP surrogate at each BO step is time-consuming, and increasingly so, as its dominant computational cost is cubic in the number of observed points. Note also that FSBO not only fits a GP at each step but also fine-tunes the MLP of its deep kernel, hence the extra time.
The second column, with the black-box time taken into account, further underlines that even though NAP is faster at test time than e.g. FSBO or GP-EI, this time gain is negligible compared to the black-box evaluations.
The third column takes into account the pretraining time for methods that require it. Note that for different test functions within the same search space, we can reuse the same model for NAP, NP-EI, MetaBO and FSBO without having to redo the pretraining, so we divided the pretraining time by the number of seeds and test functions.
Hence, it does not add much time to the total.
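A hypothetical illustration of this amortization argument (the numbers below are invented for the sketch, not those reported in the tables):

```python
def total_time_per_run(bbox_s, model_s, pretrain_s, n_seeds, n_tasks):
    # Pretraining happens once per search space, so its cost is shared
    # across every (seed, test function) run within that space.
    return bbox_s + model_s + pretrain_s / (n_seeds * n_tasks)

# e.g. 2h of black-box evaluation, 3s of model forward passes, 10h of
# offline pretraining amortized over 10 seeds x 5 test functions
t = total_time_per_run(bbox_s=7200.0, model_s=3.0,
                       pretrain_s=36_000.0, n_seeds=10, n_tasks=5)
# The amortized pretraining share (720s here) shrinks as seeds/tasks grow.
```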
It should be underlined that this way of presenting BO results is less readable than presenting regret vs BO steps as the more seeds and test tasks we have, the more negligible the pretraining time becomes compared to the black-box evaluation time.
**Table 1: MIP experiment - average test time of 1 seed**
| Method | without bbox | with bbox | with bbox & pretrain |
| --------------- | --------------- | --------------- | --------------- |
| GP-EI | 585sec | 25d 0hr 9min 45sec | 25d 0hr 9min 45sec |
| FSBO | 330sec | 25d 0hr 5min 30sec | 25d 0hr 10min |
| MetaBO | 30sec | 25d 0hr 0min 30sec | 25d 0hr 12min |
| NP-EI | 2sec | 25d 0hr 0min 2sec | 25d 0hr 36min |
| NAP | 3sec | 25d 0hr 0min 3sec | 25d 1hr |
**Table 2: EDA experiment - average test time of one seed**
| Method | without bbox | with bbox | with bbox & pretrain |
| --------------- | --------------- | --------------- | --------------- |
| GP-EI | 17sec | 1hr 5min 17sec | 1hr 5min 17sec |
| FSBO | 516sec | 1hr 13min 36sec | 1hr 14min 6sec |
| MetaBO | 35sec | 1hr 5min 35sec | 1hr 12min 5sec |
| NP-EI | 8sec | 1hr 5min 8sec | 1hr 7min 34sec |
| NAP | 9sec | 1hr 5min 9sec | 1hr 7min 42sec | | Summary: This paper proposed an end-to-end transformer-based framework for meta-Bayesian optimization (meta-BO). It formulated meta-BO as an RL problem, defining an MDP in which a policy can be trained to solve the meta-BO problem. To help the training of the RL algorithm, the paper also proposed an inductive bias via an auxiliary loss. The experiments demonstrate the effectiveness of the proposed method.
Strengths: 1. This paper deals with a good problem, which has a lot of applications in real world. The formulation of MDP is also reasonable.
2. This paper is well written, and also easy to follow.
3. The proposed inductive bias via an auxiliary loss (Section 3.3) is insightful.
Weaknesses: 1. The experiments are insufficient. In the current experiments, the authors only provided evaluation results of the proposed method NAP and other baselines on several benchmarks. However, it would be better to do some ablation studies to further analyze NAP. For example, it would be important to investigate the effect of the auxiliary loss (Eq. 3).
2. The applications are limited. RL is known to be sample-inefficient, especially when the state/action spaces are high-dimensional. Furthermore, NAP utilizes a transformer to extract historic information, which is also relatively hard to train. Therefore, NAP may be implicitly restricted to situations where x is low-dimensional, limiting its applications.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments on the applicability of our work and we try to answer their concerns below.
>The experiments is insufficient. In the current experiments, the author only provided the evaluation results of the proposed method NAP and other baselines on several benchmarks. However, it would be better to do some ablation studies to further analyze NAP. For example, it should be important to investigate the effect of the auxiliary loss Eq(3).
We would like to highlight that we did an ablation study in Section C.1 of the Appendix.
In Table 3 we show for each version of NAP, the components that are present or absent. Figure 4 shows the regret results on another experiment, an HPO experiment on XGBoost hyperparameters. We show that NAP-RL, trained only through the RL loss, performs worse than NAP, emphasising the importance of the auxiliary loss for downstream performance.
>The applications are limited. RL is known to be sample inefficient, especially when the state/action spaces are of high dimensionality. Furthermore, NAP utilized a transformer to extract historic information, which is also relatively hard to train. Therefore, NAP may be implicitly restricted into situations when x is of low dimension, limiting its applications.
Note that we pre-train NAP with RL, as such it is **offline** and therefore does not impact the **online** sample efficiency of NAP at test time as we only use online observations to select the next query.
In BO in general, the metric that matters most is online sample efficiency because we assume that we are in a setting where the black-box objective is costly and evaluating it is the main bottleneck.
Furthermore, we show that even with a dataset of rather limited size, we can pre-train NAP, e.g. on the MIP experiment where the search space is high-dimensional (135), and still learn useful information transferable to new tasks, leading to the best performance. We do acknowledge some limitations, for instance that NAP is restricted to being trained and tested on search spaces of the same dimension (discussed in the Limitations section), and we would like to make it dimension-agnostic in future work. Finally, we acknowledge the difficulty of training a transformer with RL signals only (underlined in the ablation), hence our use of the auxiliary loss.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. | Summary: The authors present a hyperparameter optimization method that is based on a transformer. In contrast to other recent work on HPO with transformers, this method does not rely on an acquisition function but instead trains an end-to-end model that outputs the acquisition scores directly.
Strengths: **Diversity of benchmark tasks**
Experiments on 4 different types of tasks, comparison against state-of-the-art methods. Demonstrate some improvements over baselines.
**Clear Motivation**
The use of transformers has shown promising results. End-to-end learning is always a good idea. The authors combine these previously existing ideas.
Weaknesses: **Complexity of the Method**
The authors combine transformers, reinforcement learning and neural processes. There are most likely a lot of knobs, and using this method effectively might be very difficult. MetaBO is a prime example of a method that adds quite a bit of complexity, and it turned out that it did not generalize to any setup other than the one considered by its authors.
**Novelty**
This is neither the first paper to use transfer learning, transformers, RL, or end-to-end learning for HPO. This is a paper that combines all of it to a new method.
A paper not discussed in this work: Bing-Jing Hsieh, Ping-Chun Hsieh, Xi Liu: Reinforced Few-Shot Acquisition Function Learning for Bayesian Optimization. NeurIPS 2021: 7718-7731
It also trains an end-to-end model and the authors demonstrated some improvements over FSBO.
**Some Overclaiming of Contributions**
In multiple locations, the authors claim to present the " first end-to-end training protocol for meta-BO". This is not the case, see reference above and MetaBO.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: This is a complex method that probably introduces several hyperparameters itself. How do you set them? How robust are they? Do you think there is a chance that this method can be directly applied to a different task, or will it require significant work to get it running?
Why is NP-EI missing for HPO-B?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not address the limitations that come with the complexity of this method, i.e., additional hyperparameters.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and for stressing the importance of end-to-end training. We try to answer the questions regarding complexity, novelty and hyperparameter tuning below.
>The authors combine transformers, reinforcement learning and neural processes. There are most likely a lot of knobs and using this method effectively might be very difficult. Meta-BO is a prime example of a method that adds quite a bit of complexity and it turned out that it did not generalize to any other setup than considered by the authors.
>This is a complex method that probably introduce several hyperparameters itself. How do you set them? How robust are they? Do you think there is a chance that this method can be directly applied to a different task or will it require significant work to get it running?
As this comment and question relate to similar topics, we answer them together. Our method involves a good amount of pretraining on source task data but this is not unlike other meta-BO methods. In our experiments we give the same budget for pretraining to NAP and MetaBO. The hyperparameters used are all provided in Table 2 in appendix and they remain the same across our experiments. We set their values based on the ones found in the repository of the MetaBO [1] baseline and did not change them further. We also fix an equal weight between the RL loss and the supervised auxiliary loss. To preserve fair comparison between all methods and because we assume the hyperparameters set by the authors of each baseline are already optimised, we do not tune ours. In that regard, we believe that this makes NAP more portable to new tasks as the hyperparameters have not been tuned for each specific task.
We acknowledge the difficulty of applying MetaBO in other domains. However, we believe that it comes from hypotheses made by the authors (that the parameters of the GPs are fixed during training and testing).
By tackling 9 heterogeneous tasks (6 in HPO-B), we show that NAP does not suffer from the same problem and achieves good performance across tasks.
>Novelty: This is neither the first paper to use transfer learning, transformers, RL, or end-to-end learning for HPO. This is a paper that combines all of it to a new method. A paper not discussed in this work: Bing-Jing Hsieh, Ping-Chun Hsieh, Xi Liu: Reinforced Few-Shot Acquisition Function Learning for Bayesian Optimization. NeurIPS 2021: 7718-7731 It also trains an end-to-end model and the authors demonstrated some improvements over FSBO.
>Some Overclaiming of Contributions: In multiple locations, the authors claim to present the " first end-to-end training protocol for meta-BO". This is not the case, see reference above and MetaBO.
These two remarks are also linked so we answer them together. We refer to the work by Hsieh et al. on a few occasions in our paper (ref [14] in bib, [2] in this comment) but thank you for giving us a chance to expand on the differences between their approach and ours. We will also add these arguments in the Related Work section.
First, we would like to point out that the setting in [2] differs slightly from ours in that it aims to do *adaptation*, i.e. few-shot learning; this means the authors meta-learn a neural acquisition function on source task data as well as data generated from a prior, but then allow their model to access test-task observations for fine-tuning. Moreover, like PACOH [3], they rely on an ensemble method with Stein Variational Gradient Descent (SVGD). We consider these prior works orthogonal directions, as they can be combined with NAP: we could also use SVGD with multiple transformers, or fine-tune NAP at test time.
Second, we argue that FSAF is not end-to-end differentiable because it still relies on Gaussian process surrogates (as does the MetaBO [1] baseline) and hence is classified as a two-stage approach. In Section 3.1 of their paper, the authors rely on the posterior mean and variance of a GP for the state-action representation of each point.
What we are claiming is to be the first **end-to-end differentiable** method for meta-BO that specifically generalises neural processes to learn acquisition functions directly from the raw observations. We will make that distinction clearer in the paper, in abstract and introduction when talking about contributions.
>NP-EI missing for HPO-B
Thank you for pointing that out, we added NP-EI as a baseline in the HPO-B experiment, please see figures of the uploaded rebuttal PDF.
----------
[1] Michael Volpp, Lukas P. Fröhlich, Kirsten Fischer, Andreas Doerr, Stefan Falkner, Frank Hutter, and Christian Daniel. Meta-learning acquisition functions for transfer learning in bayesian optimization. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
[2] Bing-Jing Hsieh, Ping-Chun Hsieh, and Xi Liu. Reinforced few-shot acquisition function learning for bayesian optimization. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 7718–7731. Curran Associates, Inc., 2021.
[3] Jonas Rothfuss, Vincent Fortuin, Martin Josifoski, and Andreas Krause. PACOH: bayes-optimal meta-learning with pac-guarantees. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 9116–9126. PMLR, 2021.
---
Rebuttal Comment 1.1:
Title: Follow-up
Comment: > FSAF is not end-to-end differentiable because it still relies on Gaussian Process surrogates
Can you elaborate in what way GP surrogates make it non-differentiable?
---
Reply to Comment 1.1.1:
Comment: We apologise if our answer was misleading: what we meant was that FSAF (Hsieh et al., 2021) and MetaBO (Volpp et al., 2020) are not end-to-end differentiable because the authors use **pretrained** GP surrogates. Indeed, during pretraining of the neural AF, the authors use a GP surrogate with parameters fit on data from the source task. They then keep those parameters fixed during RL training of the neural AF. Because of that, their approach is considered a two-stage approach.
In theory, however, it would be possible to train a GP surrogate as well as a neural acquisition function in an end-to-end fashion, through RL. We see at least two options for such a setup.
The first option would be to pretrain a GP on all source task data and then, during RL training of the neural AF, also take gradient steps in the GP to update its mean, kernel and likelihood parameters.
The second option would be to directly train a GP from scratch together with the neural AF with RL and an auxiliary loss that would correspond to the marginal likelihood.
This would be an entirely new baseline as we are not aware of any work that is training a GP surrogate and a neural acquisition function completely end-to-end in a way that corresponds to the options described above.
If the reviewer would still like this to be studied, we are open to setting up new experiments to test this method; however, we are not sure how well it would work in practice, and we have limited time at our disposal until the end of this discussion.
Strengths: ### Originality
My background is in meta-learning and reinforcement learning, not Bayesian optimization. It appears that the proposed approach is novel for meta-BO in that it makes minimal modelling choices, opting for a completely black-box neural process for the meta-surrogate function as opposed to e.g. a Gaussian process, and a completely black-box acquisition function as opposed to e.g. expected improvement (EI).
### Quality
The empirical evaluation seems reasonably thorough (though again, I am not an expert in BO), with experiments based on multiple data sources and a healthy pool of competing methods. I also appreciated the described engineering effort in adapting previous methods to handle datasets for which they were not originally designed for.
### Clarity
I'm not sure that Lemma 3.1 adds much substance in favor of the paper's design decisions. Instead, perhaps we can more loosely but intuitively argue that typical acquisition functions such as EI depend on the surrogate function's predictions of the underlying objective function (as it well should, to avoid no free lunch), but this dependency is dropped with a completely end-to-end black box acquisition function. The neural process meta-surrogate training can then be justified as softly reintroducing this dependency a la multi-task learning (multi in the sense of surrogate function modeling and acquisition function policy learning, not multiple BO tasks) via a shared architecture.
### Significance
I imagine the demonstrated empirical benefit of an end-to-end solution in a domain where handcrafted solution components have been standard will have considerable significance.
Weaknesses: - A minor point -- it is unusual to see one maximizing a "loss" (Eq. 3 and Alg. 1).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How is your framing of the acquisition function as a policy with order-invariant processing of the current optimization trace related to ideas in context-based/prototype-based meta-reinforcement learning [A]? It seems like the former is a specific instance of the latter where the state space for each MDP is singleton.
[A] Rakelly & Zhou et al., Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables, ICML 2019.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comment, for the remark on extending baselines to search spaces they were not originally designed for, and for raising an interesting point regarding meta-RL. We will answer point by point below.
>I'm not sure that Lemma 3.1 adds much substance in favor of the paper's design decisions. Instead, perhaps we can more loosely but intuitively argue that typical acquisition functions such as EI depend on the surrogate function's predictions of the underlying objective function (as it well should, to avoid no free lunch), but this dependency is dropped with a completely end-to-end black box acquisition function. The neural process meta-surrogate training can then be justified as softly reintroducing this dependency a la multi-task learning (multi in the sense of surrogate function modeling and acquisition function policy learning, not multiple BO tasks) via a shared architecture.
Regarding clarity, Lemma 3.1 is a formal explanation of the sparsity of the reward used in RL, specifically in the context of its use for BO.
This issue is well known in many RL problems, but we wanted to make it clear that it is also the case in a meta-BO setting. This, in itself, does justify the use of an auxiliary loss, but we also agree with the intuition you mentioned for the choice of the auxiliary loss.
>How is your framing of the acquisition function as a policy with order-invariant processing of the current optimization trace related to ideas in context-based/prototype-based meta-reinforcement learning [A]? It seems like the former is a specific instance of the latter where the state space for each MDP is singleton.
If we want to match the history order-invariance of our policy with the tuple order-invariance of the mentioned paper, the tuple $t_0$, ..., $t_{n-1}$ without the reward should be defined like $t_0 = (s=\{\}, a=x_0, s'=f(x_0))$ and $t_i = (s=x_{i-1}, a=x_i, s'=f(x_i))$.
We believe this is what you meant by a singleton state-space.
Even if it is doable in practice, it does not form a standard MDP because the reward function cannot be defined based on a single tuple.
Instead, a better instantiation would define the state in the tuple as a history of collected points $(x_i, f(x_i))$, however, in such case, the history order-invariance is not necessarily preserved through tuple order-invariance.
This is an interesting point, however, and we are going to include the mentioned paper in the related work section.
---
Rebuttal Comment 1.1:
Title: Confidence Update
Comment: Thank you for your responses to my questions. I have increased my rating confidence to a 3. | Rebuttal 1:
Rebuttal: We thank all reviewers for their time reading our paper and writing reviews. We are very appreciative of all positive comments made about our work, on clarity, applicability of the method, quality of experiments and baselines implementation. We further thank all reviewers for raising interesting points and questions about motivation, novelty and hyperparameters. We address each of the reviews separately in their own threads but we summarize here the main remarks.
### Novelty
We discuss prior works that make use of transformers for BO as well as prior work using RL to learn acquisition functions. However, to the best of our knowledge, we are the first to propose an **end-to-end differentiable** method that specifically learns acquisition values with RL and is based on a transformer neural process architecture. We will clarify this claim in the main paper. We also discuss orthogonal approaches that learn a better prior from the source tasks data for pretraining, as well as using RL for learning acquisitions in order to do better adaptation (fine tuning) at test time. We will also extend our discussion of these research directions and stress their differences with ours.
### Hyperparameters
We will make it clearer in the main paper that the hyperparameters of NAP were not tuned. This is only fair as we did not tune them for the baselines either, considering that the ones set by the authors in their respective code repositories were already optimised. Furthermore, we used the same values for all of the experiments, avoiding task-specific hyperparameter tuning that could limit generalisation and applicability of our method. In particular we used an equal weight between the RL loss and the auxiliary loss in all experiments ($\lambda = 1$). An ablation for that particular parameter has been carried out and results are shown in Appendix C.1.
### Extra experiment
We added the NP-EI baseline to the HPO-B experiment as noted by one reviewer; the updated plots are in the PDF document added in this rebuttal.
Pdf: /pdf/bb85ecf8632b6e7eb435fc5cf35444d7663ea0cb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this work, they developed the first end-to-end transformer-based architecture for meta-Bayesian optimization and demonstrated empirically state-of-the-art regret minimization on hyperparameter optimization, antibody, and chip design problems. They propose using a novel transformer-architecture-based Neural Process to learn the acquisition function and train it E2E, compared to prior work in meta-learning acquisition functions. They identified logarithmic reward sparsity patterns in RL and introduced an auxiliary loss maximizing the log-likelihood of making correct predictions on their labeled source-task datasets as an inductive bias to stabilize training.
Strengths: Demonstrates strong empirical results on their tasks and ablation experiments are included in the appendix demonstrating the importance of their inclusions.
Proposes novel combination of transformer network and E2E learning of the acquisition function for Bayesian optimization with a new formulation and inclusion of auxiliary loss.
Well written and well positioned with existing work.
Weaknesses: Somewhat limited novelty, since prior work has separately utilized transformers for Bayesian optimization and other prior work has meta-learned acquisition functions.
Somewhat limited experimental evidence since no hyperparameter search was conducted on the proposed method or baseline. Results were only included on four tasks.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Did you tune the weighting of the auxiliary loss?
How were the hyperparameters chosen for the implementation if no hyperparameter search was conducted.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Discussed limitations limiting to 5000 BO steps due to their use of the transformer network.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments and for underlining the clarity of the writing. We thank them for raising interesting points that we will try to explain below.
> Somewhat limited novelty due to prior separately have utilized transformers for Bayesian optimization and other prior work having meta-learned acquisition functions.
As discussed in the related work, prior work indeed exists where authors use RL to meta-learn neural acquisition functions [1,2]. However, they rely on Gaussian Process models and, as a consequence, their method is not end-to-end trainable. Prior work also exists where authors use transformers, but their goal is to meta-learn a surrogate model [3,4], which can in turn be used along with an off-the-shelf acquisition function. Again, these methods do not learn a single architecture that directly predicts acquisition function values from a history of observations. One of our contributions resides in the ability of NAP to be trained end-to-end for acquisition function value prediction. At test time, we show that this end-to-end training allows NAP to achieve better performance, making our contributions significant.
>Results were only included on four tasks.
Regarding the concerns about experimental evidence, we believe that we show the performance of NAP on a panel of experiments, ranging from hyperparameter optimisation in low dimension (HPO-B) to high dimension (MIP) and from continuous spaces (HPO-B) to combinatorial spaces (Antibody, EDA) and even mixed-type spaces (MIP). This gives, in our opinion, a good overview of the practical applications where BO in general can be used and where meta-BO methods such as NAP can also be used. Please also note that the **HPO-B experiment is actually a collection of 6 tasks** (see appendix) and is an established benchmark for meta-learning [5,6].
>How were the hyperparameters chosen for the implementation if no hyperparameter search was conducted.
Thank you for raising the topic of hyperparameters.
We did not perform any hyperparameter tuning on the values provided in the table in the appendix and, in particular, we did not tune the weight between the two losses, which is simply set to 1.0 in all experiments. Moreover, all hyperparameters are fixed across all the experiments for a fair comparison. We set the hyperparameters based on the values from the codebases of MetaBO [2] and PFN [3] because we consider that their values have already been optimised by their authors.
Finally, with this single set of hyperparameters, we were able to obtain better results on all experiments, compared to SOTA-baselines (FSBO [6], OptFormer-EI [5], GP-EI).
>Somewhat limited experimental evidence since no hyperparameter search was conducted on the proposed method or baseline.
We do not conduct a full hyperparameter search for the reasons mentioned above; however, we conduct an ablation study that can be seen as an analysis of the hyperparameter $\lambda$ giving the relative weights of the two losses in the total loss (see Appendix C.1.). If the weight of the auxiliary loss is zero, we have NAP-RL, and if the weight of the RL loss is zero, we have NP-EI. We can see that simply with equal weighting ($\lambda = 1$), we achieve better results.
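As an editorial sketch of the loss weighting discussed in this thread (the notation is ours and not taken verbatim from the paper), the ablated objective can be written as an additive combination of the two losses:

```latex
% Total training objective: RL loss plus a lambda-weighted auxiliary (supervised) loss.
% Keeping only the RL term corresponds to NAP-RL; keeping only the auxiliary term
% corresponds to NP-EI; the rebuttal reports lambda = 1 in all experiments.
\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{RL}} + \lambda \, \mathcal{L}_{\text{aux}}
```

The ablation in Appendix C.1 then amounts to sweeping $\lambda$ between these two extremes.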
----------
[1] Bing-Jing Hsieh, Ping-Chun Hsieh, and Xi Liu. Reinforced few-shot acquisition function learning for bayesian optimization. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 7718–7731. Curran Associates, Inc., 2021.
[2] Michael Volpp, Lukas P. Fröhlich, Kirsten Fischer, Andreas Doerr, Stefan Falkner, Frank Hutter, and Christian Daniel. Meta-learning acquisition functions for transfer learning in bayesian optimization. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
[3] Samuel Müller, Noah Hollmann, Sebastian Pineda-Arango, Josif Grabocka, and Frank Hutter. Transformers can do bayesian inference. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022.
[4] Tung Nguyen and Aditya Grover. Transformer neural processes: Uncertainty-aware meta learning via sequence modeling. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 16569–16594. PMLR, 2022.
[5] Yutian Chen, Xingyou Song, Chansoo Lee, Zi Wang, Richard Zhang, David Dohan, Kazuya Kawakami, Greg Kochanski, Arnaud Doucet, Marc Aurelio Ranzato, Sagi Perel, and Nando de Freitas. Towards learning universal hyperparameter optimizers with transformers. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 32053–32068. Curran Associates, Inc., 2022.
[6] Martin Wistuba and Josif Grabocka. Few-shot bayesian optimization with deep kernel surrogates. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021.
---
Rebuttal Comment 1.1:
Title: Raising Rating to 5
Comment: I would like to thank the authors for responding to my questions and for their clarifications. After reviewing their response and the discussion with the other reviewers, I will be raising my Rating to a 5 due to the clarifications on the hyperparameter search and E2E differentiability.
The proposed framework is claimed to resolve the inefficiency in the existing two-stage approaches.
Strengths: - The meta-Bayesian optimization is an important topic that can be useful in various domains.
- The presented motivation appears to be compelling, while the proposition of employing a Transformer-based architecture seems reasonable.
- The experiments include practical problem settings.
Weaknesses: - I am not sure if RL is really the optimal way to achieve an end-to-end meta-BO framework. I think this paper lacks a justification for why RL is a necessary component.
- Although this paper claims to tackle the inefficiencies from two-stage approaches, introducing RL makes training much harder. The authors add an auxiliary task to solve this issue, but due to this, the proposed method does not seem to reduce the complexity and the objective discrepancy in the previous two-stage approaches.
Please note that I do not have much expertise in the relevant fields.
I currently do not see any major flaw in this work, but I am also not fully convinced.
Comments from other reviewers and further discussion may largely change my score.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Is there any factor that necessitates the use of RL?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and for acknowledging the strengths of our work and the relevance of our experiments. We hope that we can further motivate the use of RL and auxiliary loss in our response below.
>Is there any factor that necessitates the use of RL?
To answer your first point, we would like to underline that the necessity of using RL comes from the fact that we are learning acquisition functions. As we do not have true labels for how good it is to select a point (it depends on the other points seen so far and their values), we cannot rely on supervised learning and thus choose to rely on RL. The NP-EI baseline - trained only using a supervised loss and the EI acquisition - shows that there is value in meta-learning the acquisition with RL, as NAP outperforms it on every task.
>Although this paper claims to tackle the inefficiencies from two-stage approaches, introducing RL makes training much harder. The authors add an auxiliary task to solve this issue, but due to this, the proposed method does not seem to reduce the complexity and the objective discrepancy in the previous two-stage approaches.
Even in a two-stage approach we would need to train using RL for the second stage. Indeed, we can pretrain a neural process surrogate with supervised loss but to meta-learn a neural acquisition we must introduce RL. We add the auxiliary loss to help tackle the sparsity of the RL reward, which introduces a useful inductive bias. Even though this auxiliary loss is indeed not exactly optimising the same objective as in the downstream task, we argue that (see appendix C.1.) training end-to-end enables NAP to be updated simultaneously from both signals hence reducing the objective discrepancy. We use an end-to-end architecture so we can backpropagate information both from the acquisition (RL) and the auxiliary loss (supervised) through the whole network, which is not possible in a two-stage approach (see Figure 3 in appendix).
---
Rebuttal Comment 1.1:
Comment: Thank you for the response.
I'm satisfied with the response and raising the score from 5 to 6. | null | null | null | null |
TextDiffuser: Diffusion Models as Text Painters | Accept (poster) | Summary: This paper proposed TextDiffuser to achieve accurate text rendering for diffusion models. TextDiffuser generates character-level text layouts to guide the text rendering process (image generation). The model is evaluated on 3 tasks including text-to-image, text-to-image with template, and text inpainting to demonstrate its flexibility and controllability. Additionally, the paper contributes the first large-scale text image dataset with OCR annotations, named MARIO-10M.
Strengths: 1. The paper is well motivated, which tries to solve the problem of generating text image with diffusion model. Also, the paper is well organized and easy to follow.
2. The TextDiffuser model is flexible, which can be adaptive with different conditions as shown in the experiments.
3. A new dataset specifically designed for text rendering is proposed.
Weaknesses: 1. Mismatch of character-level layout and the real text. The character-level layout is generated from standard fonts (rendered using a toolkit such as OpenCV); however, the real characters can be of any style or font, which leads to a mismatch with the character-level layout. In other words, guided by such a layout, the generated text will tend to have the same character interval and aspect ratio as the standard font, which will limit the generation diversity of fonts.
2. Generation diversity has not been considered. There exist several reasonable text styles for the same background image. As the proposed model cannot explicit control the attribute of texts such as color, layout, font, style, etc., it is important to measure the generation diversity of the model.
3. The performance comparison seems to be limited to general text-to-image diffusion models with no specific optimization on text painting. A comparison with some state-of-the-art text generation methods is necessary to show the quality of text generation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How does the model perform on more complex characters such as Chinese characters? A simple layout guidance seems to only constrain the position and order of letters, without any specific improvement on the quality of generation details. I wonder if there is any method to improve the adaptiveness on other language?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations and potential impacts have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and feedback. We aim to address and clarify each point raised.
**Character-Level Layout and Real Character Style:**
The primary role of our character-level layout is to inform and guide the position and content of visual text, without restricting it to specific styles or fonts. This design choice empowers TextDiffuser to dynamically adapt to varied text styles, as the model is trained on the diverse MARIO-10M dataset. To illustrate this, Figure 6 in Appendix I of our supplementary material shows that the model interprets a standard-font layout and coherently translates it into diverse fonts and styles according to the context.
**Generation Diversity:**
To clarify, TextDiffuser can explicitly control text layouts by conditioning the second stage of image generation on the first stage's prediction or by user-provided layout templates. As outlined in Line 278 and visualized in Figure 15 of Appendix O, even with the same layout input, TextDiffuser exhibits diversity in text attributes such as color, font and style. Furthermore, Figure(b) in the attached PDF provides additional demonstrations of TextDiffuser's diversity and control capability, like color modulation via prompts. We will emphasize this diversity and capability more clearly in the paper.
**Performance Comparison:**
To the best of our knowledge, state-of-the-art text generation methods like Imagen [67], eDiff-i [2], Character-Aware Model [42], and GlyphDraw [45] have not publicly released open-source code, checkpoints, or APIs for a thorough comparison, as highlighted in Lines 240-241.
Still, we initiated a comparison with the Character-Aware Model [42] and the concurrent work GlyphDraw [45] using samples from their papers. In Figure (d) of the attached PDF, TextDiffuser performs better than these methods. For instance, [42] suffers from misspelling issues (e.g., ‘m’ in ‘Chimpanzees’) due to its lack of explicit control, and [45] struggles with rendering images containing multiple text lines. We will release our code to facilitate future comparisons.
**Adaptability to Complex Characters, e.g., Chinese:**
TextDiffuser is promising for handling complex characters such as Chinese characters by collecting a Chinese version of the MARIO dataset and training a character-level segmentation network for Chinese characters. We highlight the effectiveness of our proposed explicit layout guidance on the quality of generation details due to its strong constraint not only on the position but also on the content of each character. The OCR evaluations in Table 4 quantitatively validate the accuracy of our text rendering. Concurrently, our qualitative experiment in Appendix J shows the drastic degradation in quality without explicit guidance. We believe our design of explicit guidance will benefit the generation of more complex characters.
We believe that our contributions of comprehensive model, dataset and benchmark are significant for future research and development in this field. We will release code, model, and dataset to advance the research community and broaden the applications of our approach.
---
Rebuttal Comment 1.1:
Comment: The author did not reply to my question and the problems still exist.
---
Reply to Comment 1.1.1:
Title: Detailed Explanation of the Question
Comment: Thank you for reviewing and responding. Here is a more detailed answer to your question:
**Question**
> How does the model perform on more complex characters such as Chinese characters? A simple layout guidance seems to only constrain the position and order of letters, without any specific improvement on the quality of generation details. I wonder if there is any method to improve the adaptiveness on other language?
**Answer**
Currently, TextDiffuser cannot render text in other languages. TextDiffuser is specialized for English text generation due to its training on English-centric datasets and embeddings of segmentation masks. We agree that layout guidance mainly focuses on layout constraints. We can improve the adaptiveness of TextDiffuser for other languages from two aspects:
* **Supporting Multilingual Text Generation**:
- Modeling: Extend the embeddings of segmentation masks and the vocabulary of the text encoder tokenizer to accommodate other languages.
- Data: Use a Multilingual MARIO dataset for training.
* **Improving Complex Character Generation Details**:
- Fine-grained Control Signals: Use fine-grained segmentation masks to explicitly guide glyphs.
- High-Resolution Rendering: Adopt more advanced frameworks like Stable Diffusion 2.1 and DeepFloyd with high-resolution rendering capabilities to improve the generation details of complex characters.
We appreciate your feedback and remain open to any further questions. | Summary: The paper proposes TextDiffusers, a 2-stage model to generate images with text per the input text prompt. TextDiffusers also supports text inpainting. The two stages consist of (1) estimation of layout of keywords (inspired by Layout Transformer) to get character-level segmentation masks and (2) image generation conditioned on the character-level segmentation masks.
The paper also contributes MARIO-10M, a large dataset of image-text pairs along with rich OCR annotations. The benchmark MARIO-Eval, with 5414 prompts, is created from MARIO-10M and a few other existing datasets in order to evaluate the quality of text rendering in the image.
Stage 1 consists of novelty such as encoding of width of keywords. Encoding width of keywords improves IoU, especially for shallower Layout Transformer. Stage 2 introduces character-aware loss and extends the denoising loss to include character level segmentation masks.
The paper contains implementation details for reproducibility, comparison against the states of the art demonstrating large improvements in OCR quality metrics, comprehensive ablations studies, several qualitative samples and limitations.
Strengths: S1) Novel approach to generate images with text
S2) New data set for MARIO-10M along with benchmark MARIO-Eval
S3) Comprehensive experiments demonstrating large improvements against various states of the art on OCR metrics
Weaknesses: W1) While the methodology introduced by the paper seems solid, the main weakness is the lack of a comparison against a strong baseline that was also trained on the training split of MARIO-10M. Appendix J alludes to the equivalency of fine-tuning a pre-trained model, but the Stable Diffusion model is not the best when it comes to producing visual text. It is understandable that official code for Imagen and Parti is not available. Perhaps DeepFloyd could have been fine-tuned on MARIO-10M. From Table 4, DeepFloyd is also better on Fidelity (lower FID score). It is not clear if it would perform better than TextDiffuser with this additional fine-tuning.
W2) Information and comparison of model size and training/inference latency are missing.
W3) Appendix N / Figure 14 in the supplementary material includes a handful of sample generations without text. However, it is not clear if TextDiffusers loses its capability to generate such images in the general sense. Qualitative and Quantitative evaluations to test this are missing.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: Please see list of weakness.
Q1) According to Line 188, the maximum length of tokens (L) is limited to 77. Do all captions in the dataset contain <= 77 tokens?
Q2) Related to Q1, DeepFloyd uses T5-XXL which is much more powerful (as observed by Imagen). Although L<=77, do you have insights on the richness of embedding that may give it a higher edge over TextDiffusers?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comprehensive review and feedback on our work.
**Comparison with Strong Baselines Fine-tuning with MARIO-10M:**
As mentioned in Appendix K, DeepFloyd is better on Fidelity due to its use of two super-resolution modules to generate higher-resolution images (1024×1024) versus our model's (512×512) and a stronger text encoder (T5 versus our CLIP). While the lack of DeepFloyd's publicly available training code and implementation details restricted us from retraining or reimplementing it, we acknowledge the potential benefits of fine-tuning it on MARIO-10M to improve its visual text generation performance. However, we assert that merely changing the dataset cannot improve DeepFloyd over TextDiffuser. We emphasize the importance of our unique proposal of explicit supervision for visual text rendering through character-level segmentation masks. As evidenced in our comprehensive ablation studies, this feature significantly improves text rendering accuracy and is missing in DeepFloyd.
**Model Size and Latency Metrics:**
Thank you for highlighting this aspect. Our TextDiffuser builds upon Stable Diffusion 1.5 (859M parameters), adding a Layout Transformer in the first stage (+25M parameters) and modifying the second stage (+0.75M parameters), augmenting it by only about 3% in terms of parameters. It trains within about four days on eight Tesla V100 GPUs, resulting in 6.6 seconds per iteration, with an inference latency of 8.5 seconds per image.
**Capability to Generate General Images:**
We provide qualitative and quantitative evaluations to demonstrate TextDiffuser's generality in generating non-text general images. We compare TextDiffuser with our baseline Stable Diffusion 1.5 as they have the same backbone. For a quantitative evaluation, the FID scores of 5,000 images generated by prompts randomly sampled from MSCOCO are as follows:
| Sampling Steps | Stable Diffusion | TextDiffuser |
|--------|-----------|-------|
| 50 | 26.47 | 27.72 |
| 100 | 27.02 | 27.04 |
For a qualitative evaluation, please refer to Figure(c) in the attached PDF, where we show more comparisons. We can see from the table and figure that TextDiffuser is highly competitive. These evaluations demonstrate that TextDiffuser maintains its capability in the general domain, primarily because our training data encompasses large-scale images from diverse real-world scenes.
**Token Length in Captions:**
A significant majority of the captions in our dataset (99%) are within the 77-token limit.
**Richness of DeepFloyd's T5-XXL Embedding:**
We agree that the T5-XXL encoder, as leveraged by DeepFloyd, holds the potential for richer textual embeddings, as we have mentioned in Appendix K. However, as analyzed in our answer under **Comparison with Strong Baselines Fine-tuning with MARIO-10M**, the absence of explicit visual text layout supervision in DeepFloyd remains a deciding factor in the performance of visual text rendering.
In conclusion, we're grateful for your recognition of our approach's novelty, the dataset contribution, and the rigor of our experiments. We will revise our paper to clearly emphasize these in light of your instructive comments.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for answering my questions and clarifying them. I don't have further questions. I have also reviewed all the other reviews and the satisfactory responses from the authors. | Summary: This work presents an approach to enhance the text rendering ability of a text-to-image diffusion model. The authors introduce a new diffusion model called text-diffuser that leverages image captions and text segmentation masks to generate text images. They also collect a large-scale image dataset MARIO-10M, which includes explicit text information such as segmentation masks and OCR annotations. The experimental results indicate a significant improvement in text generation ability but a slight decrease in image quality.
Strengths: * The large-scale text image dataset with text annotations, MARIO-10M, is commendable and valuable for further research in this domain.
* Incorporating character-level segmentation masks to enhance the text rendering ability of a diffusion model is a novel idea. The results presented in the paper demonstrate its effectiveness.
Weaknesses: * The paper mentions that the layout generation model is trained to generate text segmentation masks, which might not be suitable for scene texts with significant perspective changes. Most of the generation examples lack realistic text styles with perspective changes and complicated text layouts.
* Some GAN-based works [1, 2, 3] exploring scene text editing and a recent diffusion-based scene text editing work [4] could be relevant for part-image generation evaluation. Authors could consider incorporating a discussion and comparison with these related works.
[1] Wu et al. “Editing Text in the Wild.” https://doi.org/10.1145/3343031.3350929.
[2] Qu et al. “Exploring Stroke-Level Modifications for Scene Text Editing.” https://ojs.aaai.org/index.php/AAAI/article/view/25305
[3] Krishnan et al. “TextStyleBrush: Transfer of Text Aesthetics from a Single Example.” http://arxiv.org/abs/2106.08385.
[4] Ji et al. “Improving Diffusion Models for Scene Text Editing with Dual Encoders.” http://arxiv.org/abs/2304.05568.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * It would be interesting if we could control the style of generated texts through language descriptions with TextDiffuser. For example, could an image be generated with this caption: a bear holding a board with a **red/purple/yellow** "hello world"? It would be beneficial if the authors could provide some generation examples showcasing this capability.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Authors discussed the limitation of the trained model, including small text generation and multiple-word generation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your high recognition of our work, particularly the novelty of our TextDiffuser and the significant contribution of the MARIO-10M dataset. We've responded to your comments as follows:
**Layout generation for scene text with perspective changes:**
Our MARIO-10M dataset reveals that about 90% of the text regions maintain a horizontal orientation with rotation angles smaller than 5 degrees and without perspective changes. Hence, our layout generation model is designed to predict horizontal bounding boxes by detecting the coordinates of their top-left and bottom-right points. Adapting our model to predict more realistic scene text is indeed feasible by detecting additional coordinates, such as eight coordinates for four points. We'll clarify this adaptability in our revised paper.
**Discussion and comparison of text-editing and part-image generation:**
We appreciate your suggestion of relevant works [1,2,3,4] in the scene text editing domain. We have taken [1] as a representative work to discuss the differences between text editing and text inpainting (part-image generation), as detailed in Footnote 1 and Appendix O. While we have cited [1,2], we will also incorporate references [3,4] in our updated version.
**Control over the style of generated texts through language descriptions:**
We are happy to showcase TextDiffuser's innovative capability of controlling the style of generated texts through language descriptions. As shown in Figure(b) in the attached PDF, TextDiffuser successfully generates texts in varying colors aligned with the language descriptors.
We will incorporate these insightful discussions in the final version to inspire future explorations and applications in the field.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. All my concerns are addressed. It would be beneficial if the authors could further improve the layout generator in future work. | Summary: This paper introduces a model for generating visually appealing and coherent text within diffusion models. It also presents the MARIO-10M dataset and the MARIO-Eval benchmark for evaluating text rendering quality. Experimental results demonstrate their method's flexibility and controllability in creating high-quality text images and performing text inpainting.
Strengths: The strengths of this paper include the proposal of TextDiffuser, a flexible and controllable framework based on diffusion models, with two stages for layout generation and fine-tuning.
Weaknesses: The paper does not explicitly mention how the model handles the generation of rich-text images when the number of queries is limited. This raises concerns about the efficiency of the model when dealing with a large number of queries.
When there is a sequential relationship between boxes in the context of generating rich-text images, the paper does not specifically address how the model handles the ordering of boxes.
While part-image generation seems reasonable, it could be computationally slow when dealing with a large number of texts as it requires predicting numerous positions.
The paper does not mention the specific number of texts per individual image in the mentioned databases in the main paper, which could be important information to provide.
Regarding image resolution, the paper suggests that it can be low when dealing with single texts. However, when generating a large number of texts, high-resolution generation becomes necessary. It is essential for the authors to address how they ensure the accuracy of each generated text and the efficiency of text generation in such cases.
Using sequential generation for layout masks is an interesting point. Have the authors considered directly using diffusion to predict layout masks? It would be beneficial to have more comparative analysis between the two methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The rationale behind this paper, using seq-2-seq generation, is reasonable. However, there is not a clear explanation of how the generation of rich-text is achieved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and feedback. We've taken care to address each of your concerns below.
**Handling of rich-text images with limited queries & efficiency with a large number of queries:**
Our first stage of Layout Generation leverages an auto-regressive Transformer whose prediction time correlates with the number of queries (keywords). Meanwhile, the second stage of image generation is independent of the number of queries. The time costs of these two stages, averaged over three runs, are shown below:
| #keywords | Layout Generation (s)| Image Generation (s) |
|-----------|--------------|--------------|
| 1 | 1.07±0.03 | 7.12±0.77 |
| 2 | 1.12±0.09 | 7.12±0.77 |
| 4 | 1.23±0.13 | 7.12±0.77 |
| 8 | 1.57±0.12 | 7.12±0.77 |
| 16 | 1.83±0.12 | 7.12±0.77 |
| 32 | 1.95±0.28 | 7.12±0.77 |
| ... | ... | 7.12±0.77 |
As illustrated in the table, our model's efficiency when handling a large number of queries is commendable, primarily due to our use of a two-layer Layout Transformer. Moreover, users can directly provide a layout template for image generation, which incurs no time cost for model layout prediction.
**Handling ordering of boxes in rich-text images:**
Our Layout Generation process considers text keywords in the order provided by the user and outputs their box positions accordingly. While the box generation is sequential, the resulting image box order is spatial (two-dimensional). This spatial box ordering is learned from large-scale real-world images (e.g., MARIO-10M) by our Layout Transformer in the context of rich-text images.
**Computational efficiency in part-image generation with abundant text:**
To clarify, our part-image generation occurs during the second stage of image generation, which is guided by a predefined text position mask. Thus, there's no added time cost associated with position prediction. For efficiency regarding a large number of queries in the first Layout Generation stage, please see our first response point above.
**Specific number of texts per image in databases:**
We appreciate your suggestion. In our revised paper, we will integrate the following table detailing the word count per image in our proposed MARIO-10M dataset:
| #Words | #Images | Ratio |
|--------|-----------|-------|
| 1 | 592,153 | 5.9% |
| 2 | 1,148,481 | 11.5% |
| 3 | 1,508,185 | 15.1% |
| 4 | 1,610,056 | 16.1% |
| 5 | 1,549,852 | 15.5% |
| 6 | 1,430,750 | 14.3% |
| 7 | 1,229,714 | 12.3% |
| 8 | 930,809 | 9.3% |
We can see from the table that most images contain between 3 to 6 words, while images with one word or eight words are less frequent.
**The Accuracy and Efficiency of High-resolution Image Generation:**
We agree that high-resolution generation enhances the quality of rich-text and small-text image generation. Our TextDiffuser can seamlessly transition to high-resolution image generation by deploying a more advanced diffusion backbone. For instance, when we augment the image resolution from 512x512 to 768x768 using Stable Diffusion 2.1 (instead of 1.5), the latent space resolution also increases from 64x64 to 96x96, enhancing our character-level representation. Please refer to Figure(a) in the attached PDF for a visual demonstration of the *accuracy* of our generated text. The inference latency for 512x512 and 768x768 resolutions are 8.5s and 12.0s respectively with a batch size of 1. Thanks to our employment of an *efficient latent* diffusion model, the time increment is insignificant.
**Direct diffusion prediction for layout masks versus sequential generation:**
In our initial investigations, we explored diffusion models for layout generation. However, we encountered overlapping problems during mask generation for *consecutive characters* and *multiple lines of text*, as illustrated in "Figure(e) in the attached PDF". This overlap stems from the non-autoregressive nature of diffusion models, which generate all text simultaneously. To address this challenge, we turned to the auto-regressive Transformer for layout generation.
We believe that our contributions, including the TextDiffuser model, the MARIO-10M dataset and the MARIO-Eval benchmark, push the boundaries of current text image generation. We appreciate your constructive feedback and hope these clarifications address your concerns. We will include these discussions in our revised paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in answering my questions. I am generally accepting of the current response, and I'd like to raise my rating. | Rebuttal 1:
Rebuttal: Thank you for taking the time to review. Enclosed in the attached PDF, we have provided some figures for reference.
* Figure(a): Pre-trained on high-resolution Stable Diffusion 2.1 significantly enhances the legibility of small text.
* Figure(b): Demonstration of using language descriptions to control the style of text.
* Figure(c): Visualizations of general images generated by Stable Diffusion 1.5 and the proposed TextDiffuser.
* Figure(d): Comparison with the Character-Aware Model and the concurrent GlyphDraw. The code is not available for the Character-Aware Model, and the checkpoints and datasets are not available for GlyphDraw, so we compare against the samples in their papers.
* Figure(e): Layout generation using diffusion model and Transformer.
Pdf: /pdf/eb2f81d7277140d25d099649dc4e9e19a70b2084.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition | Accept (poster) | Summary: This paper proposes a vision-language pre-training method called POMP. It aims to solve the GPU memory problem of existing vision-language pre-training methods like CoOp when the number of classes is extremely large. POMP is composed of local contrast and local correction strategies: local contrast samples a subset of classes during training to save GPU memory, while local correction alleviates the sampling bias problem. The authors provide very comprehensive experiments showing the effectiveness of the proposed pre-training method.
Strengths: 1. The proposed local contrast strategy is reasonable, easy to implement, and effective when the class number is huge.
2. The experiments are very comprehensive, and the code is also provided. So this work could provide valuable contributions to the open-source community.
Weaknesses: The proposed local correction strategy is more like a general approach in the contrastive learning method. It encourages the positive samples to be much closer and pushes negative samples away for a more stringent decision boundary. So the reason why it works is that the features are more uniformly distributed in the feature space, which is supported by Fig. 5. Therefore, I do not think that the authors' claim that this strategy solves the sampling bias problem is proper, since the sampling bias still exists. Therefore, I encourage the authors to rethink the motivation and reason of the contrast correction strategy.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive assessment and insightful comments.
>**Q1: The claim that the local correction strategy solves the sampling bias problem is questionable.**
A1: Thanks for your feedback. We agree that sampling bias still exists with our approach. However, our intention is to convey that the local correction helps mitigate the negative effects stemming from sampling bias. As we describe in Section 3.2.2, class sampling causes the absence of some negative classes, which in turn leads to a looser decision boundary and a less uniform class feature distribution.
To alleviate this problem, our local correction strategy adds a margin term to create space for unsampled classes in the feature space. This encourages more discriminative representations even without observing all negative classes, which reduces the uniform loss from -1.05 to -1.37 and improves accuracy from 65.8 to 67.0 on ImageNet-21k (Fig.5). We will refine our statement of the local correction strategy in the final version.
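The margin mechanism described above can be sketched as follows. This is a hypothetical illustration only (the function name, temperature `tau`, and margin value are our assumptions, not POMP's exact formulation): a fixed margin is added to the logits of the sampled negative classes before the softmax, so the positive class must win by a larger gap, tightening the decision boundary even though the unsampled classes are absent.

```python
import numpy as np

def local_correction_loss(image_feats, class_feats, labels, margin=0.1, tau=0.05):
    """Sketch of a margin-corrected contrastive loss over sampled classes.

    image_feats: (B, D) L2-normalized image features
    class_feats: (K, D) L2-normalized features of the K *sampled* classes
    labels:      (B,)   index of each image's positive class in [0, K)
    """
    sims = image_feats @ class_feats.T            # (B, K) cosine similarities
    adj = np.full_like(sims, margin)              # margin on every negative class...
    adj[np.arange(len(labels)), labels] = 0.0     # ...but not on the positive
    logits = (sims + adj) / tau                   # stricter task: negatives look closer
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With `margin > 0` this loss is strictly larger than the plain sampled softmax loss, pushing class features further apart and leaving room in the feature space for the classes that were not sampled.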
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thanks for the clarification. My problem is solved and I keep my rating. | Summary: * This paper present a prompt pre-training method for vision-language models named as POMP, which can be transfer to visual recognition tasks including image classification, semantic segmentation, and object detection.
* POMP follows the prompt-tuning setting of CoOp. The authors claim that pre-training prompts on ImageNet-21K requires over 300 GB of GPU memory with traditional methods like CoOp. To make training feasible and more efficient on ImageNet-21K, POMP proposes positive and negative class sampling to reduce GPU memory cost.
* POMP is prompt-learned on the ImageNet-21K dataset and is also a task-agnostic prompt pre-training method that can be used directly in downstream methods.
Strengths: * The paper is well organized, the framework is simple.
* The authors have conducted a lot of experiments on downstream tasks.
* The design of local contrast and local correction is memory efficient for prompt tuning.
Weaknesses: * The performance on open-vocabulary object detection & instance segmentation is really not high compared with Detic (24.9 -> 25.2), even though POMP is prompt-tuned on the ImageNet-21K dataset. The observation can also be validated in Table 6 (cross-dataset evaluation for object detection). To me, it seems that POMP doesn't work on detection benchmarks.
* In the detection experiments, POMP is based on Detic; however, Detic and POMP are not strictly open-vocabulary methods, meaning the novel class names are known in the training phase. For standard open-vocabulary tasks, the novel classes cannot be seen or trained on, so they cannot be prompt-tuned in this setting. How does POMP cope with this problem?
* To me, the novelty and technical contribution of this paper are limited.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: please refer to weakness
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments. We humbly think that some concerns are caused by misunderstandings, which we will explain in detail below. We hope that our response can clarify the misunderstandings so that you can consider our work more favorably.
>**Q1: The performance gain on the object detection tasks is limited.**
A1: We note that the +0.3 mAPr gain on open-vocabulary LVIS is meaningful since Detic is a very strong baseline which uses extra ImageNet-21K data for detector pre-training. More importantly, on cross-dataset detection (Table 6), our POMP shows sizable gains of +1.9 AP50 on COCO and +0.8 AP50 on Object365.
We would like to reiterate our main contribution that our method enables large-scale prompt learning for the first time, **rather than optimize detection scores**. We pre-train a universal prompt on ImageNet-21K that transfers broadly, achieving SOTA on 21 downstream tasks in zero-shot settings: the consistent gains across classification, detection, and segmentation tasks have offered strong evidence of its effectiveness.
>**Q2: POMP takes Detic as backbone model for detection experiments. However, Detic is not a strictly open-vocabulary method for LVIS, how does POMP cope with this problem?**
A2: Thanks for your feedback. We want to clarify our rationale for choosing the Detic backbone and experimental setup, and demonstrate that our improvement is consistent across various settings.
1. For the open-vocabulary LVIS task, Detic utilizes both box-supervised data from LVIS-base (866 classes) as well as image-supervised data from ImageNet-21K that overlaps with LVIS (997 classes, 277 of which are novel classes). This allows Detic to demonstrate transfer not only from base to novel classes, but also from image-level to box-level recognition. Since Detic is the closest existing method to ours that leverages ImageNet-21K, we chose it as a strong baseline and followed its setup for fair comparison.
2. To evaluate Detic and POMP more strictly on open-vocabulary LVIS, we conduct additional experiments training only on the LVIS-base 866 classes using box-supervised data alone (denoted as **LVIS-base**. In contrast, Detic’s original setting is denoted as **LVIS-base & IN-L**). The results are as follows:
| Method | Source training data | mAPr |
|:-------:|:---------------:|:-----:|
| Detic | LVIS-base | 16.4 |
| POMP | LVIS-base | **17.4** |
| Detic | LVIS-base & IN-L | 24.9 |
| POMP | LVIS-base & IN-L | **25.2** |
As shown, our method still shows consistent improvements over Detic, demonstrating the general efficacy of our approach for open-vocabulary detection. We would be happy to include these additional experiments in the final paper. Please let us know if you would like any clarification or have additional suggestions.
3. Additionally, we would like to highlight that our **cross-dataset experiments** (Table 6), training on LVIS-full and evaluating on COCO and Object365, **follow a strictly open-vocabulary protocol**. As we responded in A1, our POMP achieves SOTA performance in these experiments as well, with gains of +1.9 AP50 on COCO and +0.8 AP50 on Object365 compared to prior art. In conclusion, we have provided strong evidence through additional experiments and analyses that our method advances the state-of-the-art in open-vocabulary detection across multiple datasets and protocols.
>**Q3: The novelty and technical contribution are limited.**
A3: We appreciate your feedback and want to clarify that our work makes several key innovations. First, our proposed methods of local contrast and local correction are novel techniques for efficient prompt learning at scale. More importantly, it is the combination and application of these techniques that enables prompt learning on large-scale visual concepts (e.g. ImageNet-21k) for the first time. This is a major contribution given that previous prompt tuning methods cannot feasibly scale to such a large number of concepts due to their computational complexity.
By pre-training a universal prompt on ImageNet-21K, we obtain a prompt representation that transfers broadly and achieves SOTA results on 21 downstream tasks spanning classification, detection, and segmentation. We hope that our responses can clarify the misunderstandings and you will consider increasing your scores after seeing our responses. We'd appreciate any references or literature you can suggest, as we aim to cite them in our work.
---
Rebuttal Comment 1.1:
Comment: I agree that POMP enables large-scale prompt learning for the first time; I will raise my rating.
---
Rebuttal 2:
Title: Kind Reminder on Your Questions and Our Response
Comment: Dear Reviewer,
Thanks again for your efforts in reviewing our paper.
We provided our response one week ago. Does it address your questions? We are more than happy to answer any further questions.
Thanks!
---
Rebuttal 3:
Title: Reminder from AC
Comment: Dear Reviewer
Could you read through the rebuttal and check if you have more questions / concerns ?
Best,
AC | Summary: This paper proposes POMP, a prompt pre-training method for pre-trained vision-language models like CLIP. While existing prompt-tuning approaches usually fine-tune the soft prompts on a specific downstream dataset with a limited number of classes, the proposed POMP conducts prompt "pre-training" on a large dataset (i.e., ImageNet-21K) with much more classes. POMP consists of two strategies: local contrast and local correction -- the former makes the memory footprint affordable with common GPUs, while the latter reduces the bias caused by the class sampling. Experiments show that POMP outperforms previous prompt-tuning methods across various visual recognition tasks and datasets.
Strengths: - Extending prompt-tuning of vision-language models from task-specific datasets to a large "pre-training" dataset (e.g., the ImageNet-21K in this paper) is a promising direction, as existing prompt-tuning methods demonstrate poor cross-dataset generalization ability and always require per-task training.
- The proposed method is simple and has natural intuitions. The local contrast strategy makes the prompt tuning feasible with a large number of classes, but the aggressive class sampling introduces potential bias. Thus, the local correction is further proposed to alleviate the negative impact of the sampling bias. The definition of the adaptive margin m is clever and provides good insights.
- The experiments of this paper have wide coverage, which includes cross-dataset and cross-domain image classification, open-vocabulary object detection, and open-vocabulary semantic segmentation. The results can provide good references for future works in prompt tuning. Furthermore, the proposed POMP shows superior performance across various tasks and datasets, establishing a new state of the art for prompt tuning.
- The code of the proposed method and all baselines are included in the supplementary material.
Weaknesses: I don't find significant flaws in this paper. There are some minor suggestions:
(1). Fig.1 looks a bit visually exaggerated: POMP only marginally improves the previous SOTA on some tasks and datasets, while the non-uniform scale in Fig.1 has a potentially misleading effect.
(2). Investigating the embedding space of POMP from the perspective of alignment and uniformity is interesting, but it is hard for me to parse "Class features of POMP have better alignment with centroids of the corresponding images, and are distributed with better uniformity" from Fig.6. Maybe it would be better to also add the concrete values of the alignment and uniformity loss.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: This paper makes solid contributions to the prompt tuning of pre-trained vision-language models. The proposed method is simple and well-motivated, and shows superior performance on a wide range of visual recognition tasks and datasets. The presentation of the paper is clear, with easy-to-understand figures and equations. Some minor suggestions are listed in the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are discussed in the appendix of the paper. However, there is no discussions about the potential social impact, although the corresponding entry is checked in the checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive assessment and constructive feedback.
>**Q1: The non-uniform scale in Fig.1 has a potentially misleading effect.**
A1: Thanks. Considering that the tasks covered in the radar chart (Fig.1) have different difficulties and use different metrics (e.g., accuracy for classification, hIoU for segmentation), we follow previous work [1,2] in using an independent scale for each task. To reduce the potential for misinterpretation, we have updated the figure to use a consistent scale of 0.46 for all tasks except “Semantic Segmentation (Open-vocab PASCAL VOC)”. Given the large performance gain (+6.9) and range for this task, we have removed it from the figure to avoid compressing the other tasks. Please refer to our attached PDF.
[1] CoCa: Contrastive Captioners are Image-Text Foundation Models. Yu et al. TMLR 2022.
[2] Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks. Wang et al. CVPR 2023.
> **Q2: Avoid potential confusion in Fig. 6.**
A2: Thanks for your suggestions. In Fig.6, our intention is to convey that:
1. Compared to the light dashed lines (CoOp), the end points of the solid lines (POMP) are closer to the centroids of the points (image features), indicating that “class features of POMP have better alignment with centroids of the corresponding images”.
2. Compared to the light dash-dot lines (CoOp), the angles between the solid lines (POMP) are larger, indicating that “class features of POMP are distributed with better uniformity”.
As you suggested, adding the concrete values of the alignment and uniformity losses is a good solution. For Aircraft, POMP reduces the alignment loss $\ell_{\text{align}}$ from 1.39 to 1.36 and the uniform loss $\ell_{\text{uniform}}$ from -0.66 to -0.81. For UCF101, POMP reduces $\ell_{\text{align}}$ from 1.41 to 1.36 and $\ell_{\text{uniform}}$ from -0.95 to -1.23. We have updated the figure in our final version; please refer to our attached PDF.
---
Rebuttal Comment 1.1:
Comment: Thanks for the updated figures, I have no further questions and will keep my initial rating.
---
Rebuttal 2:
Comment: Dear Reviewer
Could you read through the rebuttal and check if you have more questions / concerns ?
Best,
AC | Summary: This paper introduces POMP, a prompt pre-training method for vision-language models. POMP learns a universal soft prompt that can express a large number of visual concepts and transfer to various visual recognition tasks in a zero-shot manner. POMP uses local contrast and local correction strategies to reduce the training cost and improve the generalization of the prompt. The paper shows that POMP outperforms previous state-of-the-art methods on several datasets and tasks, such as image classification, semantic segmentation, and object detection. The paper also analyzes the feature space of POMP and demonstrates its alignment and uniformity properties.
Strengths: 1. It proposes a novel and efficient method to pre-train a soft prompt on a large-scale dataset with over twenty-thousand classes, which enables the prompt to capture rich semantic information for visual recognition.
2. It introduces local contrast and local correction techniques to reduce the computational and memory overhead of prompt tuning
3. It achieves state-of-the-art performance on 21 datasets for various vision tasks, such as image classification, semantic segmentation, and object detection.
Weaknesses: 1. The training process for Stage 1 is similar to that of CLIP; however, CLIP utilizes a significantly larger pre-training dataset compared to ImageNet 22k. Therefore, it is important to clarify why the CLIP model cannot directly excel at these downstream tasks and why tuning it on ImageNet 22k could yield better results. Currently, it is unclear whether the performance improvement is primarily attributed to ImageNet 22k or the proposed prompt pre-training methods.
2. Figure 2 may inadvertently create misconceptions by suggesting that training solely on ImageNet 22k enables zero-shot transfer to tasks such as object detection and segmentation. In reality, as explained in the supplementary material, after completing Stage 1 training, an additional round of pre-training on the source data specific to object detection and segmentation is necessary. To avoid confusion, it is essential to provide explicit details regarding this additional training process.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. We hope that our response can address your concerns and you can consider our work more favorably.
>**Q1: POMP’s pre-training process for stage 1 is similar to that of CLIP, why can't CLIP directly excel at downstream tasks compared to POMP, and why does tuning on ImageNet-22K yield better results?**
A1: While POMP and CLIP both pre-train on image-text pairs, **there is a key difference in the text formatting.** CLIP uses paired captions, while POMP uses a "soft-prompt + [CLASSNAME]" format. This leads to two issues for CLIP when transferring to downstream tasks:
1. **CLIP requires extensive prompt engineering**, crafting hard prompts like "a photo of a [CLASSNAME]" for synthesizing classification weights. But hard prompts are unstable: as shown in previous studies [1], small wording tweaks can drastically impact performance. For example, on the Caltech101 dataset, changing the prompt from “a photo of a [CLASSNAME]” to “a photo of [CLASSNAME]” causes a drop of more than 5% in accuracy.
2. **Generic prompts of CLIP lack task-relevant context**. For fine-grained/long-tailed datasets, prompts without contextual clues achieve lower accuracy. For example, on the EuroSAT dataset, the prompt “a photo of a [CLASSNAME]” achieves 13% lower accuracy than “a satellite photo of [CLASSNAME]”.
In contrast, by pre-training a universal soft prompt on ImageNet-22K, POMP adapts better to downstream tasks without prompt engineering. The pre-trained prompt encodes broad coverage of visual concepts, providing more expressive context (see Line 252-254). This allows POMP to outperform CLIP on downstream transfer without the need for extensive fine-tuning.
[1] Learning to Prompt for Vision-Language Models. Zhou et al. IJCV 2022.
>**Q2: Avoid potential confusion in Fig.2**
A2: Thanks for your suggestion. As you note, for detection and segmentation tasks, the region proposal and mask proposal networks need to be pre-trained on detection and segmentation data, respectively, and the hard prompt (e.g., “a photo of a”) should be replaced with our POMP prompt. We have updated the caption of Fig.2 with these notes in our final revision (see our attached pdf).
---
Rebuttal Comment 1.1:
Comment: The response has addressed my concerns. I keep the positive rating. | Rebuttal 1:
Rebuttal: We want to thank all the reviewers for the positive assessment and insightful comments, which helped improve the quality of our work. We have revised our paper accordingly and provided individual responses to each reviewer. Please find attached a PDF outlining the main changes:
1. Updated Fig.1 with a consistent scale.
2. Clarified in Fig. 2 caption that the region/mask proposal networks require pre-training on detection/segmentation source data, as suggested.
3. Added concrete values of the alignment and uniformity loss in Fig. 6.
We hope that our responses can clarify the misunderstandings and help the reviewers consider our work more favorably.
Pdf: /pdf/c0958422738f6af14a1cf9e71d727352c5615502.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DESSERT: An Efficient Algorithm for Vector Set Search with Vector Set Queries | Accept (poster) | Summary: The authors study the general case of multi-vector retrieval, i.e., ColBERT and beyond. They propose and analyze a new algorithm for this "vector set" search task, with theoretical guarantees. When integrated into ColBERT, the proposed DESSERT method is 2-5x faster at a relatively small loss in quality.
Strengths: 1. The authors discuss a general class of "vector set" retrieval scoring functions (Sec 1.1), generalizing ColBERT.
2. The authors formalize the "vector set" search problem, perhaps for the first time (although I'd be interested in a comparison with Luan et al. 2021 "Sparse, Dense, and Attentional Representations for Text Retrieval" for completeness). They propose a new algorithm, unlike ColBERT's. The algorithm provides theoretical guarantees.
3. The proposed method is shown to be ~2pt worse than a recent state-of-the-art ColBERT method (PLAID) on MRR quality, while being 3x faster, performing search in just 15.5 milliseconds per query on CPU. This is on the Pareto frontier with respect to the ColBERT retrieval quality/latency curve.
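For readers unfamiliar with this scoring family: the ColBERT-style sum-of-max relevance that DESSERT approximates can be sketched with a brute-force baseline (an illustrative toy, not code from the paper):

```python
import numpy as np

def maxsim_score(Q, S):
    """Sum over query vectors of each one's best match in the target set."""
    sims = Q @ S.T                    # (m_q, m) pairwise cosine similarities
    return sims.max(axis=1).sum()

def brute_force_search(Q, sets):
    """Score every target set exhaustively; return indices, best first."""
    return np.argsort([maxsim_score(Q, S) for S in sets])[::-1]

def unit_rows(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

rng = np.random.default_rng(0)
Q = unit_rows(rng.normal(size=(4, 8)))
sets = [unit_rows(rng.normal(size=(16, 8))) for _ in range(10)]
sets[3] = np.vstack([Q, unit_rows(rng.normal(size=(12, 8)))])  # plant a match
ranking = brute_force_search(Q, sets)  # the planted set should rank first
```

This exhaustive scoring costs |Q|·|S| floating-point inner products per set, which is the cost DESSERT's hashing scheme avoids.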
Weaknesses: DESSERT is primarily tested on the MS MARCO query set (besides a small synthetic experiment). The authors justify much of this by citing PLAID [35], but that paper, like many IR papers from the last 1.5-2 years, evaluates on several datasets, including out-of-domain and larger ones. It is not clear how much quality loss DESSERT would incur out-of-domain, say, on LoTTE or BEIR (or even a subset of one of them, if computational resources do not permit more tests). This is the main weakness in my opinion.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How much space does the full method consume? I see 15GB are consumed by the hash table. Is that all?
- How long does preparing the search data structures (indexing) take?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *>DESSERT is primarily tested on the MS MARCO query set (besides a small synthetic experiment). The authors justify much of this by citing PLAID [35], but that other paper like many IR papers in the last 1.5-2 years tests on several datasets, including especially out-of-domain and larger datasets. It's not inherently clear how much quality loss DESSERT would incur out-of-domain, say, on the LoTTE or BEIR (or even a subset of one of them, if computational resources do not permit more tests). This is the main weakness in my opinion.*
Thank you for the suggestion. We have run additional experiments on the LoTTE dataset. We find that DESSERT performs well in the out-of-domain setting. See the rebuttal PDF for full Pareto tradeoffs on ten of the LoTTE datasets.
Thank you for the reference to the work of Luan et al. We will take a deeper look and add this to our discussion of related work.
__Questions:__
__Q1: How much space does the full method consume? I see 15GB are consumed by the hash table. Is that all?__
Yes, 15 GB is the space required by the full method. In general, DESSERT tables are an order of magnitude smaller than the (non-quantized) full embedding collection. For example, in our LoTTE experiments, the vast majority of configurations were under 4 GB.
__Q2: How long does preparing the search data structures (indexing) take?__
This is highly hardware-dependent. If a GPU (or accelerator) is available to cluster the centroids (same process used by PLAID), then DESSERT runs in a fraction of the embedding time. For example, MS-MARCO took several hours to embed on GPU, and then 2.5 additional hours to build the DESSERT index. We conducted a new set of experiments on LoTTE (using different CPU hardware), and here DESSERT took about 15 minutes to index 1.5M sets. The key takeaway is that DESSERT indexing adds only a small overhead to the existing embedding costs.
If our response has addressed the issues raised in your review, we hope that you will consider raising the score.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I might be open to raising my score +1 to a 7 (accept) but I think the work needs a more crisp statement of where it's uniquely favorable compared to PLAID ColBERTv2 or ColBERTer. I can already see that the index is quite small (15GB) and that the method is quite controllable to trade away some quality for latency, in a graceful way.
Would DESSERT in principle work well at 10x or 50x the MS MARCO current scale? Would you expect it to scale better or the same as other methods? How would latency vs storage scale?
---
Reply to Comment 1.1.1:
Title: Concise Summary of DESSERT Advantages
Comment: Thank you for the suggestion. We agree that it would be helpful to summarize the unique advantages of our method over other alternatives in a clear, concise way. Please see our proposed statement below:
The primary advantage of DESSERT over previously proposed methods such as PLAID and ColBERTer is improved latency. DESSERT requires less time than competing methods to compare each query-target set pair. In practice, a primary driver of performance is the fact that DESSERT performs set-to-set comparisons with binary/integer operations as opposed to floating point multiplications/inner products. Thus, the latency advantages of DESSERT will persist even as we scale 10x or 50x. Finally, since the DESSERT index is low-memory (each set representation is smaller than the corresponding representation in ColBERT/PLAID), we can scale by an order-of-magnitude while staying within the RAM budget of a typical workstation. | Summary: In this paper, the authors studied a new problem of vector set search with vector set queries. They have formalized this problem and proposed a novel, provable hashing scheme, DESSERT, to efficiently deal with this problem. Moreover, they have provided theoretical analysis for DESSERT and conducted experiments to validate its efficiency and accuracy.
Strengths: - The motivation to study this new problem is clear, which is a vital sub-problem in semantic search, and the authors formalize a general definition of this problem.
- They design an efficient hashing to solve this challenging problem and provide a theoretical bound and query time complexity.
- The presentation is clear and easy to follow.
Weaknesses: ***Novelty and Contributions***
- I appreciate the authors have provided the theoretical bound and query time complexity for their proposed hashing scheme. One of my concerns is that the improvement of the query time complexity is only marginal compared to the brute force method, which is still O(N) in general. Moreover, the impact of L is less discussed either in the theory part or in the experiments, making the efficiency improvement less promising.
***Experiments***
- Another primary concern is the experiments. The experimental results are too few: the performance of DESSERT is validated on only a single real-world dataset, making the conclusions relative to PLAID less convincing. Moreover, no ablation or parameter study is performed, making it hard for users to know the effectiveness of the implementation tricks and how to set the hyperparameters of DESSERT. Please see Q1-Q4.
***Presentation***
- The paper still needs some careful proofreading. There are some minor issues in Algorithm 2. Please refer to Q5 for more details.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ***Experiments***
Q1: For the passage retrieval experiments, it is less convincing to me to use a single dataset for validation. Can they conduct and report results on at least one more dataset (for example, TREC Complex Answer Retrieval dataset [1] used in ColBERT)?
Q2: The metric R@1000 is too weak as an accuracy measure. Can they report recall together with R@1000?
Q3: What are the best hyperparameter settings of C, L, probe, and k for the results they show in Table 1? Moreover, can they conduct the parameter study for some vital parameters (e.g., L and C) of DESSERT?
Q4: In Section 5, the authors developed some implementation tricks for DESSERT. Can they conduct an ablation study to validate the effectiveness of the three tricks they proposed (i.e., filtering by k-means clustering, space optimized sketches, and the concatenation trick)?
***Presentations***
Q5: There exist some typos in Algorithm 2:
- In line 2: $G\circ H(G,S_i)$ -> $A_1 \circ A_2(Q,S_i)$;
- In line 4: $f_k(q)$ -> $f_L(q)$;
- In line 11: $count_{m_q}/L$ -> ${count}_{m_i}/L$;
- In line 13: $0,\cdots,N-1$ -> $1,\cdots,N$.
***Reference***
[1] Laura Dietz, Manisha Verma, Filip Radlinski, and Nick Craswell. 2017. TREC Complex Answer Retrieval Overview. In TREC.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This work does not appear to have any negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *>I appreciate the authors have provided the theoretical bound and query time complexity for their proposed hashing scheme. One of my concerns is that the improvement of the query time complexity is only marginal compared to the brute force method, which is still O(N) in general. Moreover, the impact of L is less discussed either in the theory part or in the experiments, making the efficiency improvement less promising.*
Thank you for pointing this out. In the revision, we plan to include better descriptions of these parameters (see also our response to Reviewer hUSH). In short:
L is the number of hash functions used to approximate the similarity. L is a design parameter: higher values of L require a larger index size and query time but result in a smaller error when estimating the similarity scores. This can be seen from Lemma 4.2.3: for a fixed failure probability $\delta$, the estimation error decays with $\frac{1}{\sqrt{L}}$.
Regarding the asymptotic complexity: This is true, but we want to emphasize that the asymptotic comparison does not fully capture the advantages of our method. The brute force method requires O(N) distance calculations, which typically require many floating-point multiplications. The most expensive asymptotic term in our method comes from integer comparisons and increments, which are substantially faster on hardware.
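As a toy numerical sketch of the role of L (illustrative only, not the DESSERT implementation): with signed random projections, the fraction of the L hash functions on which two vectors agree estimates their angular similarity, and the spread of that estimate shrinks as L grows.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
x = rng.normal(size=d)
y = x + 0.5 * rng.normal(size=d)      # a moderately similar vector
cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
p_true = 1 - np.arccos(cos) / np.pi   # SRP collision probability

def estimate(L):
    """Fraction of L signed random projections on which x and y agree."""
    W = rng.normal(size=(L, d))
    return np.mean(np.sign(W @ x) == np.sign(W @ y))

# Empirical spread of the estimator shrinks roughly as 1/sqrt(L)
spread = {L: np.std([estimate(L) for _ in range(200)]) for L in (16, 256)}
```

Going from L=16 to L=256 shrinks the standard deviation of the estimate by roughly 4x, consistent with the $\frac{1}{\sqrt{L}}$ rate in Lemma 4.2.3.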
__Experiments__
We conducted a hyperparameter study, which we described on Line 301. We have added a more complete description of this study to the appendix. We have also added an evaluation on ten more datasets from the LoTTE benchmark.
__Questions__
__Q1: For the passage retrieval experiments, it is less convincing to me to use a single dataset for validation. Can they conduct and report results on at least one more dataset (for example, TREC Complex Answer Retrieval dataset [1] used in ColBERT)?__
We have added an evaluation on ten of the LoTTE datasets, used in ColBERTv2. Our method shows extremely competitive Pareto-frontier performance. See the rebuttal PDF for full Pareto tradeoffs on ten of the LoTTE datasets.
__Q2: The metric R@1000 is too weak as an accuracy measure. Can they report recall together with R@1000?__
R@1000 is the standard evaluation metric for the MS-MARCO evaluation task. However, we also report MRR@10 (see Table 1), which is a much harder metric to improve.
In our LoTTE experiments, we report a variety of metrics (from R1@10 to R1@1000) using the evaluation code from the original paper.
__Q3: What are the best hyperparameter settings of C, L, probe, and k for the results they show in Table 1? Moreover, can they conduct the parameter study for some vital parameters (e.g., L and C) of DESSERT?__
We conducted this study, via grid search over the values described in the paper (see Line 301). The results of this study are listed below. We have added this to the appendix.
Hyperparameter settings:
Settings for DESSERT corresponding to the first row in the table, optimized for returning 10 docs:
- hashes_per_table (C) = 7
- num_tables (L) = 32
- initial_filter_k = 4096
- nprobe_query = 1
Second row:
- hashes_per_table = 7
- num_tables = 64
- initial_filter_k = 4096
- nprobe_query = 2
Settings for DESSERT corresponding to the first row in the table, optimized for returning 1000 docs:
- hashes_per_table = 6
- num_tables = 32
- initial_filter_k = 8192
- nprobe_query = 4
Second row:
- hashes_per_table = 7
- num_tables = 32
- initial_filter_k = 16384
- nprobe_query = 4
Intuitively, these parameter settings make sense: increase k and the number of total hashes for higher accuracy, and increase k for returning more documents (1000 vs. 10).
__Q4: In Section 5, the authors developed some implementation tricks for DESSERT. Can they conduct an ablation study to validate the effectiveness of the three tricks they proposed (i.e., filtering by k-means clustering, space optimized sketches, and the concatenation trick)?__
Unfortunately, all of these tricks are strictly required to have a practical index. Without the pre-filtering, the indexing process requires > 1 day to run. The concatenation trick provides a similar speedup. Finally, the process runs out of memory if we do not use space-optimized sketches, so this evaluation is not possible. We will make a note of this in the paper.
If our response has addressed the issues raised in your review, we hope that you will consider raising the score.
---
Rebuttal Comment 1.1:
Title: Reply to Rebuttal
Comment: Thank you for the rebuttal.
The authors have addressed all of my concerns. I am happy to see that much more datasets have been included in this manuscript, and the results make this work more convincing. I will raise my score to 6. Thanks! | Summary: The paper studies the vector set search problem, which is an extension of the canonical near-neighbor search problem and finds an application in the semantic search task. The paper claims that existing methods for vector set search are unacceptably slow, and thus, proposes an approximate search algorithm called DESSERT. The paper presents both theoretical analysis and empirical evaluation to demonstrate the effectiveness of the DESSERT algorithm.
Strengths: S1. The paper proposes a new algorithm, DESSERT, to solve the vector set search problem efficiently.
S2. The paper presents both theoretical analysis and empirical evaluation to testify the effectiveness of the DESSERT algorithm.
S3. The DESSERT algorithm can be applied to the semantic search task.
Weaknesses: W1. [Motivations & Contributions] The paper is the first to formally formulate the vector set search problem (if not, citations to the first such work should be provided), yet it provides only one concrete application of the problem (i.e., semantic search). The reasons why we need to study the vector set search problem require further clarification. It seems to me that the DESSERT algorithm is actually an incremental work, with a specific focus on improving the running time of the ColBERT model. This severely limits the paper's contributions.
W2. [Experiments] According to the paper, the proposed algorithm, DESSERT, is implemented in C++. However, the baseline method, the brute-force algorithm, is implemented in Python using the PyTorch library. The fairness of comparing two methods implemented in different programming languages needs further explanation.
W3. [Presentations] The paper's presentation needs improvement. There exist several typos and undefined notations in the paper. Some concrete examples are given as follows.
- Line 139: I believe the notation H refers to the hash set, which requires further clarification.
- Line 30: citations are required to the literature on traditional single-vector near-neighbor search.
- Algorithm 2: the output given in the pseudocode doesn't concur with the descriptions given in the main text (though they may refer to the same variable).
---
After the rebuttal process, most of my concerns mentioned above have been sufficiently addressed. Thus, I raise my rating from 4 to 5.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to the Weaknesses given above, and provide further explanations, particularly for W1 and W2.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I do not see any potential negative societal impact in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: Yes, we believe that we are the first to formalize the vector set search problem. While semantic search is the clear “killer application” today, we provide references to other potential applications such as database lineage tracking (line 96), image instance retrieval, market basket analysis, and graph neural networks (line 342). We focus on semantic search in our empirical evaluations because it is a core routine for various tasks (web search, product search, recommendation ranking, and question-answering). We hope that our general formulation of the problem will encourage future work in this area, especially since we have shown that search is tractable for a large class of similarity functions.
Secondly, we respectfully disagree with the claim that accelerating ColBERT is an incremental contribution. ColBERT has had a tremendous impact on both research and practice. The “late interaction” modeling approach introduced by ColBERT represents the state of the art in numerous IR benchmarks such as passage retrieval [1], multi-hop reasoning [2], open question answering [3], and even broader retrieval problems such as table QA [4]. Given the widespread interest and adoption of ColBERT and the importance of achieving low-latency inference on cost-effective hardware in industry settings [5], we believe that our work has the potential for substantial real-world impact.
[1] https://arxiv.org/abs/2205.09707
[2] https://arxiv.org/abs/2101.00436
[3] https://arxiv.org/abs/2007.00814
[4] https://aclanthology.org/2023.acl-short.133/
[5] https://arxiv.org/abs/2101.09086
W2: PyTorch is not a pure python library and heavily calls out to optimized C++ and CUDA kernels for tensor operations. Similarly, we implement DESSERT with a C++ backend and invoke the algorithm via a python interface (please see our code provided in the supplementary materials for further details). Thus, this is an apples-to-apples comparison because both PyTorch and our codebase are Python binding-based C++ libraries.
W3:
- Line 139: This is the set of functions from which we draw the LSH function. It is defined in line 126.
- We discuss this literature in lines 83-94. We will also add the citations in line 30.
- We have cleaned up the algorithm pseudocode description by including a table for notation as suggested by Reviewer hUSH. We also fixed multiple typos in the algorithm presentation.
If this has addressed the issues in your review, we hope that you will consider raising the score. | Summary: The paper addresses the problem of set-to-set similarity calculation and retrieval, which is a problem with any downstream applications. While previous approaches inevitably perform a brute force similarity calculation over |Q| query vectors and |S| target vectors for each set comparison F(Q, S), this paper leverages random hashing functions to estimate similarity between query and target vectors to avoid this brute force search. Combined with highly space-optimized hash tables, and building on top of previous centroid-based candidate set filtering mechanisms, the proposed algorithm achieves notable speed improvements over existing methods at only a small cost to recall. Theoretical support is provided for the algorithm, along with experiments on synthetic, solvable data and the standard MS MARCO benchmark dataset.
Strengths: - Very important: The proposed method cleverly utilizes randomness via an LSH-based approach to estimate similarity and avoid brute force similarity calculation between sets of vectors, leading to notable improvements in speed over previous methods, at reasonable memory requirements for large scale settings, and with very limited negative impact on recall.
- Important: The experiments are simple and clear.
- Important: The choice of baselines seems appropriate. As the authors note, their method reaches points on the Pareto curve, w.r.t. latency, that are not reachable by baselines even though the baselines are nominally customizable for trading off latency vs. recall.
- Important: The paper is clearly written.
- Of some importance: Theoretical analysis is provided for the proposed method, although, as the authors note, they “assume sufficiently high relevance scores and large gaps in our theoretical analysis to identify the correct results.”
Weaknesses: - Important: As noted by the authors, this work draws heavily upon LSH-based NN vector search methods that already use LSH methods to approximate exact similarity calculation. In my opinion, the extension to the set-to-set comparison setting is fairly straightforward. This is a drawback of the methodological contribution of the paper, in terms of novelty, though it may be entirely outweighed by the positive empirical performance of the approach over strong baselines.
- Of some importance: The intro of the paper is written as if the paper proposes a generic framework for cross-query-vector aggregation (A2) and cross-target-vector aggregation steps (A1), but to my understanding, the only setting experimentally considered in the paper is that of A2 = weighted sum and A1 = max. This seems to be the main practical setting of interest in settings like passage retrieval, but I would not say this paper really broadens how people have been thinking about set-to-set comparison.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - The paper argues that vector-level NN retrieval is clearly limited, but I think this assumes that one is really looking for a comparison using something like A2=weighted sum and A1=max. I think there are settings where one would be interested in A2=max and A1=max, as well as other variants. Are there problems besides passage retrieval that are better evaluated in terms of other combinations of A2 and A1 besides those considered in this paper?
- The set-to-set comparison problem might also be viewed more generally as comparison of two distributions over R^d. When |Q| and |S| are large relative to d, these distributions might be accurately estimated and then distributional divergence measures deployed. What do you think the trade-offs are with a vector similarity-based approach? (And could the perspectives be unified based on specifying A2/A1 and sim?)
- I think I never saw which specific LSH functions you used, besides using 16/32/64 of them. Are they all signed random projection to approximate cos-sim?
- What does perfect recall mean on synthetic data? That you already know what the correct top-1 avg-max-sim “passage” is?
- Can you increase the latency of your approach to reach the performance of PLAID?
- In the next version of the appendix, could you include the results of your hyperparameter grid search?
- Since you make a claim about Pareto improvement, it would be nice to include graphs visualizing the Pareto curves of the methods.
- More of a comment, really: You might add “passage retrieval” for language modeling pretraining and finetuning to the list of potentially valuable applications for this kind of approach. Current methods use simple heuristics like embedding averaging across tokens to reduce the problem to vector-level NN search: https://arxiv.org/pdf/2112.04426.pdf.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, the limitation section was satisfactory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding novelty: While LSH methods have been widely used for the last 20 years, our set search departs from the standard approach in some subtle but important ways. The vast majority of LSH search algorithms group points into hash tables, then explicitly compute similarities between points that fall into the same buckets. A naive adaptation of this method would actually require more memory than PLAID. Instead, we use collision statistics to rank sets based only on the distribution of hash codes, meaning that we do not store the d-dimensional embeddings. While similar ideas have recently been used for vector similarity search [1] and density estimation [2], this is still an emerging area and a new interpretation of LSH.
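A toy sketch of this interpretation (using one-bit signed random projections; illustrative only, not the DESSERT codebase): target sets are indexed once by their hash codes, and at query time sets are ranked purely from collision counts, never touching the d-dimensional embeddings.

```python
import numpy as np

rng = np.random.default_rng(2)
d, L = 16, 64
W = rng.normal(size=(L, d))           # one signed-projection bit per table

def codes(X):
    """L one-bit hash codes per row of X (a toy one-bit table per hash)."""
    return (W @ X.T > 0)              # (L, n) boolean

def dessert_score(Q, S_codes):
    """Rank score from collision statistics only: the fraction of the L
    tables in which a query and target vector collide estimates their
    similarity; apply max over the set, then sum over query vectors."""
    qc = codes(Q)                     # (L, m_q)
    # collisions[i, j] = number of tables where query i and target j agree
    collisions = (qc[:, :, None] == S_codes[:, None, :]).sum(axis=0)
    return (collisions / L).max(axis=1).sum()

Q = rng.normal(size=(4, d))
sets = [rng.normal(size=(16, d)) for _ in range(10)]
sets[7] = np.vstack([Q, rng.normal(size=(12, d))])  # plant a matching set
precomputed = [codes(S) for S in sets]              # built once at index time
scores = [dessert_score(Q, sc) for sc in precomputed]
```

Note that scoring here uses only boolean comparisons and integer counts, which is the source of the hardware advantage described above.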
While it is true that the NLP community currently only uses A2 = (unweighted) sum and A1 = max, our theory shows that the set-search problem is tractable for a larger family of functions. We hope that this might lead to better modeling techniques (e.g., learned weights in the sum, or soft nonlinearities in place of the maximum), though this is beyond the scope of our present study. Our general analysis provides some insight about which kinds of functions might be good candidates (e.g., Lemma 4.2.1 identifies smooth, monotone functions as compatible with fast search).
__Questions:__
1. While it is true that the NLP community currently only uses A2 = (unweighted) sum and A1 = max, our theory shows that the set-search problem is tractable for a larger family of functions. We hope that this might lead to better modeling techniques (e.g., learned weights in the sum, or soft nonlinearities in place of the maximum), though this is beyond the scope of our present study.
Problems such as database lineage tracking [1] use a linear combination of average and max similarity (and [1] references papers that use different A1 / A2). Image instance retrieval can be formulated as A1=max and A2=max (e.g., over SIFT vectors), but again other compositions are possible [2]. We think that the design of A1 and A2 represents an exciting opportunity for future work, since our results show that a large class of similarity aggregation functions can be solved tractably.
[1] https://arxiv.org/pdf/2107.06817.pdf
[2] https://arxiv.org/pdf/2101.11282.pdf
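To make the A1/A2 design space concrete, here is an illustrative parameterization (our own toy sketch, not code from the paper), with A1 aggregating over target vectors and A2 over query vectors:

```python
import numpy as np

def set_similarity(Q, S, a1=np.max, a2=np.sum):
    """F(Q, S) = A2 over query vectors of (A1 over target vectors of sim).
    The defaults give the ColBERT-style sum-of-max score."""
    sims = Q @ S.T                    # (m_q, m) pairwise inner products
    return a2(a1(sims, axis=1))

rng = np.random.default_rng(0)
Q, S = rng.normal(size=(4, 8)), rng.normal(size=(16, 8))
colbert_style = set_similarity(Q, S)              # A1 = max, A2 = sum
instance_style = set_similarity(Q, S, a2=np.max)  # A1 = max, A2 = max
smooth = set_similarity(Q, S, a1=np.mean)         # soften the inner max
```

Any smooth, monotone choice of these aggregations falls within the family our analysis covers (see Lemma 4.2.1).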
2. This idea is very interesting. For divergence-based ranking to be feasible, we would need a fast, low-memory way to model high-dimensional distributions and compute divergences. Most methods scale poorly with input size and dimensionality (in both runtime and error), but some recent LSH-based algorithms [2, 3] can construct a kernel density estimate of the distribution in near-linear time. If a divergence measure can be constructed from these methods, it would lead to a tradeoff where DESSERT requires $O(m m_q)$ time to materialize the pairwise similarity matrix, but the divergence calculation runs in time $\tilde{O}(m + m_q + C)$ where $C$ is the cost to compute the divergence.
For example, consider the kernel density estimate of the total variation distance.
$$\int_z \left|\sum_{q \in Q}k(q, z) - \sum_{x \in S}k(x, z)\right|dz$$
If we approximate this integral by sampling point estimates at a set $Z$ of sampling locations (e.g. Monte Carlo estimation), we have an estimate of the form:
$$\frac{c}{|Z|}\sum_{z \in Z} \left|\sum_{q \in Q}k(q, z) - \sum_{x \in S}k(x, z)\right|$$
where $c$ is a scalar. If the quantity $k(x, z)$ is chosen from the family of LSH similarity kernels [2], it can be approximated via DESSERT if we choose $z$ from the set $Q$. This gives the decomposition:
$$A_1(A_2(q, S)) = c\sum_{z \in Q}\left| \sum_{q \in Q} k(q,z) - A_2(z, S) \right|$$
$$A_2(q,S) = \sum_{x \in S} \sigma(q, x)$$
$$\sigma(x,z) = k(x,z)$$
Unfortunately, it is not clear whether the restriction $z \in Q$ ruins the Monte Carlo estimator. It is also not clear whether other divergences (such as the KL divergence or Hellinger distance) can be similarly decomposed. Another problem is that nonparametric density estimation suffers from the curse of dimensionality: $|S|$ must grow much faster than $d$ to have constant estimation error (so this may only work for very large sets). This is a fascinating direction for future work.
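To make the estimator above concrete, here is a minimal numeric sketch; the Gaussian kernel, set sizes, and sampling locations are illustrative stand-ins (the actual construction would use an LSH similarity kernel and $Z \subseteq Q$):

```python
import numpy as np

rng = np.random.default_rng(0)

def k(a, b, bandwidth=1.0):
    """Gaussian kernel between two vectors (an illustrative stand-in for an LSH kernel)."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * bandwidth ** 2))

def tv_estimate(Q, S, Z, c=1.0):
    """Monte Carlo estimate (c/|Z|) * sum_{z in Z} |sum_q k(q,z) - sum_x k(x,z)|."""
    total = 0.0
    for z in Z:
        q_mass = sum(k(q, z) for q in Q)
        s_mass = sum(k(x, z) for x in S)
        total += abs(q_mass - s_mass)
    return c * total / len(Z)

d = 8
Q = rng.normal(size=(16, d))                       # query set
S_near = Q + rng.normal(0.0, 0.05, size=(16, d))   # target set close to Q
S_far = rng.normal(3.0, 1.0, size=(16, d))         # target set far from Q
Z = rng.normal(size=(256, d))                      # Monte Carlo sampling locations

print(tv_estimate(Q, S_near, Z), tv_estimate(Q, S_far, Z))  # near < far
```

As expected, a target set with a distribution close to the query's gets a smaller estimated divergence than a distant one.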
References:
[1]: “FLASH: Randomized Algorithms Accelerated over CPU-GPU for Ultra-High Dimensional Similarity Search,” Wang et al., SIGMOD 2018
[2]: “Sub-linear RACE Sketches for Approximate Kernel Density Estimation on Streaming Data,” Coleman and Shrivastava, WWW 2020
[3]: “Rehashing Kernel Evaluation in High Dimensions,” Siminelakis et al., ICML 2019
3. Yes, the LSH functions are all signed random projections.
4. Yes, we computed the top-1 avg-max-sim “passage” to use as ground truth for the synthetic experiments.
5. Regarding whether DESSERT can match PLAID: We’ve conducted new experiments on LoTTE that show the full latency-performance tradeoff. Our experiments show that it is sometimes (but not always) possible to match the PLAID performance. This is probably due to the estimation error of our similarity approximation. Lemma 4.2.3 shows that in principle, this error can be driven arbitrarily small (as $L \to \infty$, the error between the DESSERT and PLAID ranking scores goes to zero with high probability). However, the error scales with $\frac{1}{\sqrt{L}}$, so zero error requires large values of $L$ that may not be feasible in practice.
6. We plan to include these results, as well as the Pareto-optimal hyperparameter configurations for LoTTE.
7. See our evaluation on LoTTE (pdf attached in the author rebuttal), where we show the full Pareto curves. On the plots, the PLAID result shows the lowest-latency result attainable by PLAID. We did not do the full curves for MS-MARCO due to computational constraints.
8. Thank you for the reference! We will mention this in the revision.
If our response has addressed the concerns raised in your review, we hope you will consider raising the score.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: Thanks for the thorough reply. The response addresses all of my questions. While my main concern was novelty, I appreciate that a naive extension of existing methods to the set-to-set setting would be extremely memory intensive, and the proposed algorithm makes some advances on that front. Other parts of the response point out ways in which the paper's theory and method are relatively general across possible applications, although the experiments in the paper remain somewhat narrow in their focus. I raise my score from 6 to 7 as a result, though I am keeping my confidence at 2 because I am not intimately familiar with the paper's area. | Rebuttal 1:
Rebuttal: We thank all of the reviewers for their thoughtful feedback!
It seems the most common theme expressed in the reviews was a desire for evaluation on more datasets. To that end, we have run 10 additional evaluations on the LoTTE dataset for out-of-domain retrieval. Interestingly, we found that DESSERT performed very well compared to PLAID in this setting. We hope that this sufficiently addresses the concerns of those reviewers who wished to see an evaluation on more datasets. We have attached a PDF plotting the Pareto curves for DESSERT and PLAID on these datasets.
Pdf: /pdf/755a1ff50b6080186a1339888175b52a288a6e31.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents a study focused on the problem of vector set search with vector set queries, which is a crucial subroutine for various web applications. The authors highlight the insufficiency of existing solutions in terms of speed. To address this, they propose a new approximate search algorithm called DESSERT. DESSERT is a versatile tool that offers robust theoretical guarantees and demonstrates excellent empirical performance. The authors integrate DESSERT into ColBERT, a highly optimized state-of-the-art semantic search method, and report a notable 2-5x speed improvement on the MSMarco passage ranking task.
Strengths: 1. The problem formulation is truly innovative. Vector set search is undeniably an essential and challenging task, with practical applicability in serving LLMs.
2. The proposed methodologies are lucid and firmly grounded, ensuring the reproducibility of the research. DESSERT uses hash tables for element-wise similarity search to build a data structure for vector set similarity search. Moreover, DESSERT takes advantage of hash tables' fast query speed to perform efficient queries for similar sets.
3. DESSERT has a detailed theoretical analysis in terms of inner and outer aggregation, which justifies its choices of aggregation functions in the algorithm.
Weaknesses: It would be better to have a section that unifies the notation for algorithms and theory. Maybe a table would be better.
Overall, I have a positive impression on the paper. If authors are willing to further polish the theory part of the paper, I'm willing to raise the score.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. How can DESSERT be applied to different similarity measures?
2. How to intuitively explain the running time and memory? How do the terms $L$ and $T$ relate to the search quality?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review and suggestions. We agree that the theory is a little hard to read, so we’ve added a table to disambiguate. We’ve reproduced (part of) the table below and have added this to the appendix. We have also updated the notation in the algorithm listings, since there were some mistakes (e.g., $k$ instead of $L$ as the hash subscript in line 4 of Algorithm 2). We also plan to standardize the notation between the practical and theoretical sections (e.g., we can rewrite Algorithm 2 to replace the “count” variable with $\hat{\mathbf{s}}(q, S_i)$). We thank you for pointing out these discrepancies and we plan to carefully consolidate the notation as much as we can.
| Notation | Definition | Explanation |
| ---------------------------- | --------------------------------------------------------- | ---------------------------------------------------------------------------------------- |
| $D$ | Set of target sets | Collection of documents |
| $N$ | Cardinality $\\vert D \\vert$ | Number of documents |
| $\\mathcal{D}$ | DESSERT index of $D$ | Search index data structure |
| $S_i$ | Target set | Document (text passage) |
| $Q$ | Query set | Search phrase (text passage) |
| $x \\in S_i$ | Vector in target set $S_i$ | Embedding |
| $q \\in Q$ | Vector in query set $Q$ | Embedding |
| $m_i$ | Cardinality $\\vert S_i \\vert$ | Length of $i$th document |
| $\\mathrm{score}_i$ | Estimate of $F(Q,S_i)$ | Approximate relevance / search ranking score |
| $L$ | Number of hashes | Larger $L$ increases the accuracy of $\\mathrm{score}$ at the cost of space and latency. |
__Questions:__
*Q1: How can DESSERT be applied to different similarity measures?*
To use DESSERT with a similarity measure other than cosine similarity, we can use a different LSH function. There are well-known LSH functions for similarities based on Euclidean, Manhattan, and Lp-norm distances [1] as well as measures defined over sets and Hamming codes (such as Jaccard similarity [2-4], edit distance, etc). There is also a developing research area using asymmetric hashing techniques [6] to design more flexible, asymmetric similarity measures.
[1]: “Locality-Sensitive Hashing Scheme Based on p-Stable Distributions,” Datar et al., SCG 2004
[2]: “One Permutation Hashing,” Li et al., NIPS 2012
[3]: “Densifying One Permutation Hashing via Rotation for Fast Near Neighbor Search,” Shrivastava and Li, ICML 2014
[4]: “Re-randomized Densification for One Permutation Hashing and Bin-wise Consistent Weighted Sampling,” Li et al., NeurIPS 2019
[5]: “Locality-sensitive hashing for the edit distance,” Marcais et al., Bioinformatics, Volume 35, Issue 14, July 2019
[6]: “Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS),” Shrivastava and Li, NIPS 2014
*Q2: How to intuitively explain the running time and memory? How do the terms L and T relate to the search quality?*
T is the maximum number of elements in a single set that share the same hash code. This bound can be loosely interpreted as saying, “No set $S_i$ contains too many elements that are very similar to a single query vector $q \\in Q$.” In the text-search setting, T can be thought of as the number of duplicate words (or very close synonyms) in the document.
To develop intuition for why we need this bound, look at lines 9-10 of Algorithm 2. If an element of the query set $Q$ collides with every element in a target set $S_i$, the loop in lines 9-10 takes time $|S_i|$. If this happens for every target set, the resulting algorithm is asymptotically no better than brute force (albeit with expensive distance calculations replaced by cheap integer operations). However, this does not happen except in highly degenerate, unrealistic cases where the document consists entirely of duplicate / near-duplicate terms (e.g., the document “cat cat cat kitten cat” and the query “cat kitten”). In practical search datasets, we’ve found that T is fairly small (usually under 20).
L is the number of hash functions used to approximate the similarity. L is a design parameter - higher values of L require a larger index size and query time but result in a smaller error when estimating the similarity scores. This can be seen from Lemma 4.2.3: for a fixed failure probability $\delta$, the estimation error decays with $\frac{1}{\sqrt{L}}$.
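A quick simulation illustrates this $\frac{1}{\sqrt{L}}$ behavior. Here single-bit signed random projections serve as a simplified stand-in for our hash functions, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
x = rng.normal(size=d)
q = x + 0.5 * rng.normal(size=d)

# exact SRP collision probability: p = 1 - theta(q, x) / pi
cos = q @ x / (np.linalg.norm(q) * np.linalg.norm(x))
p_true = 1 - np.arccos(np.clip(cos, -1, 1)) / np.pi

def estimate_error(L, trials=200):
    """Mean absolute error of the collision-rate estimate of p using L hash bits."""
    errs = []
    for _ in range(trials):
        planes = rng.normal(size=(L, d))
        p_hat = np.mean((planes @ q >= 0) == (planes @ x >= 0))
        errs.append(abs(p_hat - p_true))
    return float(np.mean(errs))

for L in (16, 64, 256):
    print(L, estimate_error(L))  # error shrinks roughly like 1/sqrt(L)
```

Quadrupling L roughly halves the estimation error, which matches the $\frac{1}{\sqrt{L}}$ rate in Lemma 4.2.3.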
We will add this discussion to Section 4.3.
If this rebuttal has addressed your concerns, we hope you may consider raising the score.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response! I like the notation table a lot and hope you can incorporate it in your revision. I'm also happy to have my questions answered and concerns addressed. Thus, I'll raise my score to 7. | Summary: This paper considers a nearest neighbor search problem where each point is a set of vectors, and the distance function is drawn from a general class of aggregation functions over the vectors.
The approach presented here is based on the LSH algorithms, but since we're dealing with multiple vectors, the bounds behind LSH need to be re-derived. Assumptions used in order to allow or speed up the search include that the distance function obeys a certain Lipschitz smoothness property, and tools include inverted indices, sampling, and compressed tables. These are all fairly standard in the field.
Strengths: The query model introduced is interesting, and the authors make a good argument as to its utility. The techniques utilized are non-trivial, and constitute a contribution to the field of nearest neighbor search. The fact that the distance function is very general is also a plus.
Weaknesses: The presentation can be somewhat improved. The authors actually put in significant work into this direction, but the presentation can be improved by adding a high-level overview of the approach before introducing the pseudo-code. This is a more effective approach than referring to code to explain an approach.
The authors should also add background on LSH and inverted indices, so that the paper can be more accessible to non-experts.
The explanation of the motivation behind TinyTable is garbled, and a clearer exposition is necessary.
---------------------------------
In response to the author rebuttal, and in particular the improvement in presentation, I raised my score.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for pointing out the presentation issues.
Regarding high-level overview: Good suggestion. We’ve added the following description to Section 3: “At a high level, DESSERT compresses the collection of target sets into a form that makes similarity operations efficient to calculate. This is done by replacing each element of the set with its LSH codes, transforming the set into an integer array of hash values. At query time, these hash values are compared with the corresponding hashes of the query set elements to approximate the pairwise similarity matrix (Figure 1). This matrix is used as the input for aggregations $A_2$ and $A_1$ to rank the target sets.”
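The pipeline in this description can be sketched as follows. This is a simplified illustration with single-bit hashes and a max-then-sum aggregation, not our actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, L = 16, 64
planes = [rng.normal(size=d) for _ in range(L)]  # one hyperplane per hash function

def codes(vecs):
    """Replace each vector with its L one-bit LSH codes (simplified single-bit hashes)."""
    return np.stack([[int(p @ v >= 0) for p in planes] for v in vecs])

def score(query_codes, target_codes):
    """Approximate F(Q, S): sum over q of (max over x of estimated similarity)."""
    # the collision fraction between each (q, x) code pair approximates their similarity
    sims = (query_codes[:, None, :] == target_codes[None, :, :]).mean(axis=2)
    return sims.max(axis=1).sum()

# a toy collection of two target sets ("documents")
S1 = rng.normal(size=(8, d))
S2 = rng.normal(size=(8, d)) + 4.0
index = [codes(S1), codes(S2)]  # sets are stored only as integer hash codes

Q = S1 + 0.05 * rng.normal(size=(8, d))  # a query set resembling S1
scores = [score(codes(Q), tc) for tc in index]
print(int(np.argmax(scores)))  # S1 should rank first
```

Note that ranking never touches the original floating-point vectors: only the precomputed hash codes are compared at query time.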
Regarding background: We began with a longer text that contained a thorough exposition on LSH and inverted indices, which we had to cut down due to the NeurIPS page limit. In the revision, we have put this background back into the appendix and provided references in the main text for the interested reader.
Regarding TinyTable: After a second read-through, we agree. We’ve drafted a revised description of TinyTables, which we hope will be clearer:
“DESSERT has two features that constrain the underlying hash table implementation: (1) every document is represented by a hash table, so the tables must be low memory, and (2) each query performs many table lookups, so the lookup operation must be fast. If (1) is not met, then we cannot fit the index into memory. If (2) is not met, then the similarity approximation for the inner aggregation step will be far too slow. Initially, we tried a naive implementation of the table, backed by an std::vector, std::map, or std::unordered_map. In each case, the resulting structure did not meet our criteria, so we developed TinyTable, a compact hash table that optimizes memory usage while preserving fast access times. TinyTables sacrifice O(1) update-access (which DESSERT does not require) for a considerable improvement to (1) and (2).”
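One standard way to satisfy properties (1) and (2) in a read-only table is a flat offsets-plus-values (CSR-style) layout. The sketch below illustrates that general idea and is not TinyTable's exact layout:

```python
import numpy as np

class CompactStaticTable:
    """A compact, read-only hash table: for each LSH bucket, store the ids of the
    set elements that hash there, packed into two flat arrays (offsets + values).
    Illustrative CSR-style sketch, not the authors' actual TinyTable."""

    def __init__(self, bucket_of_element, n_buckets):
        counts = np.bincount(bucket_of_element, minlength=n_buckets)
        self.offsets = np.concatenate(([0], np.cumsum(counts)))
        # stable argsort groups element ids by bucket, preserving insertion order
        self.values = np.argsort(bucket_of_element, kind="stable")

    def lookup(self, bucket):
        lo, hi = self.offsets[bucket], self.offsets[bucket + 1]
        return self.values[lo:hi]

# elements 0..5 hashed into 4 buckets
buckets = np.array([2, 0, 2, 3, 0, 2])
table = CompactStaticTable(buckets, n_buckets=4)
print(table.lookup(2))  # element ids whose hash code is 2
```

The two flat arrays avoid per-bucket pointer and allocator overhead, and each lookup is two offset reads plus one contiguous slice; the tradeoff is that the table cannot be updated in place, which matches the O(1)-update sacrifice described above.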
If this rebuttal has addressed your concerns, we hope you may consider raising the score. | null | null | null | null |
DaTaSeg: Taming a Universal Multi-Dataset Multi-Task Segmentation Model | Accept (poster) | Summary: This paper proposes the universal model for image segmentation using multi-dataset multi-task training. By using universal segmentation representations at the entity (thing or stuff) level, the paper use merge operation for different segmentation task. Experiments show the effectiveness of the proposed method.
Strengths: 1. The paper is well-written and easy to understand.
2. The extensive experiments show the effectiveness of multi-dataset multi-task training.
Weaknesses: 1. Limited novelty. The main contribution of this paper is multi-dataset multi-task training. However, there is no obvious difference from OneFormer except for the multi-dataset training. To solve the label-space conflict of multi-dataset training, the paper directly uses the language embeddings of a language model, which have already been widely explored.
2. Merging universal segmentation representations appears to follow a fine-to-coarse pipeline. There is no clear evidence showing whether this fine-to-coarse pipeline is better than a coarse-to-fine one.
3. The weakly-supervised instance segmentation module also has limited novelty, as it simply applies the projection loss of BoxInst to the larger Objects365 dataset.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Could the authors try other sampling strategies in multi-dataset training? The strategy in the paper could lead to imbalanced dataset sampling.
2. The performance improvement on COCO mainly comes from using the Objects365 dataset. Directly pretraining on Objects365 and then finetuning on COCO could perform better than the proposed method in the paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: This paper cannot solve the annotation inconsistency of multi-dataset training. For example, `window' is defined differently in COCO and ADE20K: it is a stuff category in COCO but a thing category in ADE20K.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful feedback.
>**W1: Novelties and difference to OneFormer:**
There are multiple significant differences between DaTaSeg and OneFormer. Though we discussed the most significant difference in L27 and L90, We detail more differences below:
1. Multi-dataset training is challenging (Reviewer EhPn) and using the same set of parameters (i.e., a single model) for multiple datasets saves computational resources (Reviewer 7Lho). OneFormer doesn't support multi-dataset training while DaTaSeg does. We emphasize that it's non-trivial to develop a single model for multi-dataset training, while attaining good performance on all datasets. The final simple yet effective design is in fact optimized from another more complicated design, as explained in supplementary Sec. D.
2. With our multi-dataset training, we 1) leverage knowledge from multiple datasets, containing different types of annotations, to improve segmentation performance on all datasets; 2) enable weakly-supervised instance and panoptic segmentation through knowledge transfer; 3) directly enable open-vocabulary segmentation. All these points are missing from OneFormer.
3. There are significant differences between DaTaSeg's and OneFormer's multi-task training strategy: we handle different segmentation tasks by post-processing (our MERGE operation), and OneFormer handles different segmentation tasks by defining a task prompt and using task-specific queries. This means OneFormer would have different model outputs for different tasks, while our model always produces the same, universal mask representation. Advantages of our framework include enhancing knowledge sharing among tasks, and potential better generalization to more future segmentation tasks.
4. Different supervision: OneFormer only uses fully-supervised training; while DaTaSeg explores weaker supervision. DaTaSeg achieves weakly-supervised instance segmentation using only bbox annotation.
5. Different network architectures: Except from the above differences, OneFormer first obtains a set of text queries using a text encoder and then applies a contrastive learning between the text queries and the object queries. DaTaSeg does not use such text queries.
>**W2: Fine-to-coarse pipeline:**
* The fine-to-coarse pipeline of merging universal segmentation representation is straightforward, simple, and effective.
* Designing a coarse-to-fine pipeline is challenging, since we need to decompose semantic segmentation into meaningful instance segmentation masks.
* Our fine-to-coarse pipeline is non-ambiguous, while ambiguity is hard to eliminate in a coarse-to-fine pipeline.
>**W3: Novelties in weakly-supervised instance segmentation module:**
* Our novelty lies in the proposed multi-dataset multi-task universal segmentation framework. We aim to design a simple, yet effective network architecture, and the projection loss from BoxInst fits our goal well.
* We propose to use a universal segmentation representation (Sec. 3.1) with a fully-shared network architecture (Sec. 3.4), and apply different losses to different types of segmentation tasks.
With the projection loss, we perform bounding-box-supervised instance segmentation *without any modification to the architecture*, which also performs well empirically.
* To the best of our knowledge, we are the first to apply the projection loss in the multi-dataset multi-task segmentation setting.
>**Q1: Sampling strategy in multi-dataset training:**
* Our sampling strategy (L194-L199) avoids the imbalance of dataset sampling by specifying the per-dataset sampling ratio. The proportion of training samples coming from each dataset in the whole training process is determined by that sampling ratio.
* In our main results, the sampling ratio is 1:4:4 for ADE:COCO:O365. We ablate the sampling ratio on a Resnet50 backbone and show the results in the table below.
| Sampling ratio | ADE semantic | COCO panoptic | ADE panoptic | O365 instance |
|:--------------:|:------------:|:-------------:|:------------:|:-------------:|
| 1:4:4 | 48.1 | 49.0 | 29.8 | 14.3 |
| 1:2:2 | 46.8 | 48.6 | 29.1 | 12.8 |
| 1:1:1 | 45.3 | 48.0 | 28.4 | 13.7 |
* Results show that our adopted sampling ratio is better than the other sampling ratios.
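Concretely, such a per-dataset sampling ratio amounts to drawing each training batch's source dataset from a fixed categorical distribution. A minimal sketch, with illustrative names and step count:

```python
import random

random.seed(0)

datasets = ["ADE", "COCO", "O365"]
ratio = [1, 4, 4]  # per-dataset sampling ratio 1:4:4

def sample_sources(n_steps):
    """Pick which dataset each training batch is drawn from."""
    return random.choices(datasets, weights=ratio, k=n_steps)

picks = sample_sources(90_000)
for name in datasets:
    # empirical fractions approach 1/9, 4/9, 4/9
    print(name, round(picks.count(name) / len(picks), 3))
```

Over a long run, each dataset contributes a fraction of batches proportional to its weight, regardless of the raw dataset sizes.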
>**Q2: "Improvement on COCO mainly comes from Objects365":**
* In Table 2 of the submission, we have shown the results of training DaTaSeg on all combinations of the three datasets. With or without cotraining on Objects365, COCO performance is approximately the same. Hence, the performance improvement on COCO does not mainly come from using the Objects365 dataset.
* We also experimented with training DaTaSeg from a checkpoint pretrained on O365 (3rd row in Table 2 in our submission), using ResNet50 backbone. The results are shown in the table below:
| Pretrain data | ADE semantic | COCO panoptic | ADE panoptic | O365 instance |
|--------------:|:------------:|:-------------:|:------------:|:-------------:|
| IN+O365 | 47.6 | 48.7 | 28.8 | 13.6 |
| IN | 47.2 | 48.7 | 29.4 | 14.5 |
* Results demonstrate that pretraining on O365 does not improve the performance for DaTaSeg. We hypothesize this is because weak bounding-box pretraining does not help a segmentation-only model like DaTaSeg.
>**Limitations: Annotation inconsistency of multi-dataset training:**
* For stuff categories, we apply MERGE operation on mask predictions to obtain final predictions, during both training and inference. Therefore, our method can predict the window category as “stuff” category in COCO and “thing” category in ADE20k.
This inconsistency is one motivation behind our design (L117-119): We first adopt a universal segmentation representation for different tasks, and then treat them differently in merging and postprocessing.
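As an illustration of this fine-to-coarse direction, a semantic-style MERGE can be sketched as taking, per class, the union of the mask proposals predicted as that class. This is a simplified stand-in for the paper's MERGE operation, not the exact implementation:

```python
import numpy as np

def merge_semantic(masks, classes, n_classes, h, w):
    """Merge per-proposal binary masks into a per-class semantic map by taking,
    for each class, the union of the proposals predicted as that class."""
    semantic = np.zeros((n_classes, h, w), dtype=bool)
    for mask, cls in zip(masks, classes):
        semantic[cls] |= mask
    return semantic

h = w = 4
m1 = np.zeros((h, w), dtype=bool); m1[:2] = True  # proposal 1 covers the top half
m2 = np.zeros((h, w), dtype=bool); m2[2:] = True  # proposal 2 covers the bottom half
sem = merge_semantic([m1, m2], classes=[0, 0], n_classes=2, h=h, w=w)
print(bool(sem[0].all()))  # both proposals merge into one class-0 region
```

Under this view, whether `window' is merged (stuff, COCO) or kept as separate instances (thing, ADE20K) is decided purely by the per-dataset post-processing, not by the shared mask representation.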
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' effort in the rebuttal. It still does not fully convince me.
In Table 2, using both COCO panoptic and O365 datasets brings 18.3 mIoU on the ADE semantic task. That means the annotation inconsistency still exists to some extent. It is similar to the setting of training the model on ADE semantic and O365 boxes and running inference on COCO panoptic. Another reason is the image and task domain gap. That might be why the paper uses the ADE semantic dataset instead of the ADE panoptic dataset.
Also, the merge operation can somewhat resolve the annotation inconsistency. During inference, the user should select whether to adopt the merge operation based on the dataset.
Anyway, I appreciate the difficulties of multi-dataset training and the heavy workload. Thus, I tend to vote for borderline acceptance. However, I cannot give a higher score because the problem of multi-dataset training for segmentation still needs to be solved.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 8rwi,
Thank you for recognizing the difficulties in our problem setting and our hard work!
We carefully address your comments below.
- **Table 2 performance:** When not training on ADE semantic or COCO panoptic, one primary reason for the lower performance is that the model only has limited knowledge about the categories in the untrained dataset. While it's interesting to transfer to untrained datasets, it's not the main problem that our submission is trying to address.
- **Why we use ADE semantic instead of ADE panoptic:**
1. In our problem setting, we want to train on a suite of datasets of different segmentation tasks. ADE20K is one of the most widely-used benchmarks for semantic segmentation, so we choose to train on ADE20K semantic dataset. If we also train on ADE20K panoptic dataset, then the performance improvement on ADE20K semantic may come from ADE20K panoptic training (we note that semantic annotation is a subset of panoptic annotation which includes both semantic categories and instance identities), rather than from cotraining on other datasets -- which makes it harder to argue the benefits of multi-dataset multi-task training.
2. Besides, we are also curious about how well our model can perform on ADE20K panoptic without directly training on it, while only exploiting the weaker semantic annotations.
- **Merge operation:** During inference time, the type of merge operation applied is decided by the desired segmentation task, as discussed in L151 of the paper. The user only needs to specify the segmentation task to perform.
- **The problem of the multi-dataset training for segmentation is not solved:** Yes, we agree. This field is underexplored. Given the great benefits of multi-dataset multi-task segmentation models, we see our paper as one of the very first few works that take an initial step in this direction. We look forward to more future work to further improve the performance. | Summary: This paper proposes DaTaSeg, a universal multi-dataset multi-task segmentation model. DaTaSeg uses a shared representation and different merge operations and post-processing for different tasks. Weak-supervision is employed for cheaper bounding box annotations and knowledge is sharing across different datasets with text embeddings from the same semantic embedding space and shared network parameters. A subset of the Objects365 validation set is annotated for instance segmentation. Experiments shows that DaTaSeg gets improved performance on dataset-specific models and enables weakly-supervised knowledge transfer. DaTaSeg also scales with the training dataset number and enables open-vocabulary segmentation.
Strengths: 1) This paper is easy to follow.
2) The method proposed in this paper will be useful in the future as it achieves the segmentation task under the multi-task multi-dataset setting.
Weaknesses: The ablation study in the paper is somewhat not sufficient. The hyperparameters λ in equation (4) and μ in equation (5) are not provided and there needs to be experiments on the impact of hyperparameters.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: This paper proposes the multi-dataset multi-task segmentation model. The hyperparameters λ and μ in the equations (4) and (5) are crucial for the training. Could you provide the experiments on the impact of hyperparameters on the model performance?The authors do not discuss Limitations in the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not discuss Limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our contributions and the helpful feedback. We carefully address the comments below.
>**Weaknesses and Questions: Hyperparameters and ablation study:**
- Our settings of $\lambda$ and $\mu$ closely follow Maskformer [A] and Mask2former [B], and we add the weights for the bounding-box projection loss $L_{proj}$ for the O365 dataset.
- We did an ablation study on the weights for the projection loss $L_{proj}$, $\lambda_{proj}$ and $\mu_{proj}$, using ResNet50 backbone, as shown in the table below:
| $\lambda_{proj}$ | $\mu_{proj}$ | ADE semantic | COCO panoptic | ADE panoptic | O365 instance |
|-----------------:|-------------:|:------------:|:-------------:|:------------:|:-------------:|
| 5.0 | 1.0 | 45.5 | 48.3 | 28.4 | 12.5 |
| 2.0 | 0.5 | 47.2 | 48.7 | 29.4 | 14.5 |
- The results indicate that the 2nd setting (our final setting) has a better performance on all datasets.
- Finding the optimal weights for multiple losses is an interesting but challenging problem, and we leave it for future work.
- We will add the above table, and the tables in our rebuttal to reviewer t7xz and 8rwi to ablation studies in the revision.
>**Limitations:**
- We have a "Limitations" section in supplementary Sec. G.
[A] Cheng, Bowen, Alex Schwing, and Alexander Kirillov. "Per-pixel classification is not all you need for semantic segmentation." NeurIPS 2021.
[B] Cheng, Bowen, et al. "Masked-attention mask transformer for universal image segmentation." CVPR 2022. | Summary: [Tasks] This paper introduces DaTaSeg, a universal multi-dataset multi-task segmentation model that addresses the interconnections between panoptic, semantic, and instance segmentation tasks.
[Methods] DaTaSeg utilizes a shared representation, consisting of mask proposals with class predictions, across all tasks. To handle task discrepancies, the model employs distinct merge operations and post-processing techniques tailored to each task. Additionally, DaTaSeg leverages weak supervision, enabling cost-effective bounding box annotations to enhance the segmentation model. To facilitate knowledge sharing across datasets, DaTaSeg utilizes text embeddings from the same semantic embedding space as classifiers and shares all network parameters among datasets.
[Experiments] The model is trained on ADE semantic, COCO panoptic, and Objects365 detection datasets. DaTaSeg exhibits improved performance across all datasets, particularly for small-scale datasets, achieving a 54.0 mIoU on ADE semantic and a 53.5 PQ on COCO panoptic. Furthermore, DaTaSeg enables weakly-supervised knowledge transfer for ADE panoptic and Objects365 instance segmentation.
[Results] Experimental results indicate that DaTaSeg scales effectively with the number of training datasets and facilitates open-vocabulary segmentation through direct transfer.
[Dataset] Additionally, the authors have annotated an Objects365 instance segmentation set comprising 1,000 images, which will be released as a public benchmark.
Strengths: 1. The authors have conducted comprehensive experiments, showcasing the state-of-the-art performance achieved on multiple segmentation benchmarks. This highlights the robustness and effectiveness of their proposed method.
2. The paper is skillfully organized, making it easy to follow. The logical structure and clear presentation enhance the reader's understanding of the research.
3. Efficient Use of Supervision. While previous works have explored training universal models on multiple datasets and tasks, a notable strength of this paper is the effective utilization of weak bounding box supervision for segmentation. Compared to full mask annotations, weak bounding box supervision is a more cost-effective and practical solution. This approach makes the proposed method more accessible and applicable in real-world scenarios.
Weaknesses: [Technical contributions on the multi-dataset multi-task model.] One of the key contributions of this paper is the joint training of multiple datasets and multiple tasks within a unified framework. However, it should be noted that this aspect has already been explored by [1]. The referenced work employs a single set of parameters pre-trained for Semantic/Instance/Panoptic Segmentation, Referring Segmentation, Image Captioning, and Image-Text Retrieval tasks. Therefore, the authors may need to clarify the differences between this work and X-Decoder.
[Technical contributions on text embedding classifier.] The utilization of a text embedding classifier has also been explored in previous works, such as [2]. In this work, an image encoder is trained to encode pixel embeddings, while CLIP text embeddings are employed as the per-pixel classifier. The key ideas are quite similar, although there are some differences on how to better leverage the text embeddings.
[Lack of comparisons with some published works and subpar performance compared to state-of-the-art methods.] The paper primarily compares the model's performance against methods designed or trained solely on a single task, referred to as "Separate" in the paper. However, it is worth noting that DaTaSeg benefits from training on a significantly larger sample size compared to these listed works. Consequently, the observed performance gains over the "Separate" models are not unexpected. Furthermore, there are published works, such as X-Decoder [1], that jointly train models on multiple datasets and tasks, yielding superior performance on many benchmarks compared to the reported results in this paper.
On ADE semantic (mIoU): X-Decoder achieves 58.1 compared to DaTaSeg's 54.0; on COCO Panoptic (PQ): X-Decoder outperforms DaTaSeg with 56.9 versus 53.5. In most benchmarks, DaTaSeg performs worse than X-Decoder.
[1] "Generalized decoding for pixel, image, and language." Zou, Xueyan, Zi-Yi Dou, Jianwei Yang, Zhe Gan, Linjie Li, Chunyuan Li, Xiyang Dai et al. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15116-15127. 2023.
[2] "Language-driven Semantic Segmentation." Boyi Li and Kilian Q Weinberger and Serge Belongie and Vladlen Koltun and Rene Ranftl. International Conference on Learning Representations. 2022
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Most of my questions are listed in the Weaknesses section, with my main concerns focused on the technical contributions and performance comparisons with some published works. However, I have a couple of minor questions:
1. Will the code and model weights be made available to the public? It would be valuable to have access to these resources for further exploration and replication of the proposed method.
2. Given that this model can be trained with weakly-supervised tasks, I'm curious if further improvements in performance can be achieved by training on a larger quantity of weakly-supervised samples. Are there any trade-offs between the quality and quantity of the samples when it comes to enhancing model performance?
I believe addressing these questions would provide additional insights into the practicality and potential improvements of the proposed method.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and helpful comments, and for recognizing multiple strengths of our submission. We carefully address the comments and questions below.
>**W1: Technical contributions on the multi-dataset multi-task model:**
- We train a single universal segmentation model on multiple segmentation tasks and multiple segmentation datasets, in one stage. As reviewer EhPn pointed out, we explore a very challenging task. However, the Table 1 results in the X-Decoder paper are **"task-specific transfer"**, which follows the “first pre-training then fine-tuning” paradigm to achieve the best performance on each dataset. It also requires separate fine-tuned model weights for each dataset.
- The joint pre-training in X-Decoder only includes panoptic segmentation, referring segmentation, and image-text pairs, which has a focus on vision-language tasks. These tasks are very different from ours: we cotrain on three mainstream segmentation tasks (semantic/instance/panoptic) and exclusively focus on segmentation. This difference also leads to significantly different training data.
- Besides, we propose to include a simple weakly-supervised instance segmentation training scheme that uses bounding-box supervision to increase the amount of training data, since mask supervision is expensive and thus limited in scale.
- We'll cite X-Decoder and include the comparison in the revision.
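For intuition, below is a minimal sketch of how bounding-box supervision alone can train masks via axis projections (a BoxInst-style projection loss; the array shapes and the BCE formulation here are illustrative assumptions, not our exact implementation):

```python
import numpy as np

def projection_loss(mask_prob, box):
    """Box-supervised mask loss sketch: the predicted mask's x/y
    max-projections should match the box's x/y extents, so no
    per-pixel mask labels are needed. `mask_prob` is an (H, W)
    array of per-pixel foreground probabilities; `box` is
    (x0, y0, x1, y1) in pixel coordinates."""
    H, W = mask_prob.shape
    eps = 1e-6
    proj_x = mask_prob.max(axis=0)           # (W,) projection onto x-axis
    proj_y = mask_prob.max(axis=1)           # (H,) projection onto y-axis
    x0, y0, x1, y1 = box
    tgt_x = np.zeros(W); tgt_x[x0:x1] = 1.0  # 1 inside the box extent
    tgt_y = np.zeros(H); tgt_y[y0:y1] = 1.0

    def bce(p, t):
        p = np.clip(p, eps, 1 - eps)
        return float(-(t * np.log(p) + (1 - t) * np.log(1 - p)).mean())

    return bce(proj_x, tgt_x) + bce(proj_y, tgt_y)
```

A mask matching the box extents yields a near-zero loss, while a mask that spills outside (or misses) the box is penalized, which is what lets box annotations supervise mask proposals.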
>**W2: Technical contributions on text embedding classifier:**
- We thank the reviewer for the question. We note that we never claimed that using a text embedding classifier is our major novelty. Instead, our novelty lies in using the text embedding classifier for knowledge sharing among different datasets (L186-188). Open-vocabulary segmentation methods, like [2], mainly use it to support arbitrary text queries. In L190, we acknowledged that the same technique is used in open-vocabulary segmentation and discussed the difference.
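For intuition, a toy sketch of knowledge sharing via a shared text-embedding classifier follows (all names, shapes, and the `text_embed` function are illustrative assumptions, not our actual implementation):

```python
import numpy as np

def classify_masks(mask_embs, class_names, text_embed):
    """Score each mask proposal against a dataset's own class list by
    dot product with frozen text embeddings. Because every dataset's
    classifier lives in the same text-embedding space, no manual label
    mapping between datasets is needed. `text_embed` maps a class name
    to a unit-norm vector (e.g., from a CLIP-style text encoder)."""
    W = np.stack([text_embed(c) for c in class_names])   # (C, D)
    logits = mask_embs @ W.T                             # (N, C)
    return logits.argmax(axis=1)                         # predicted class ids
```

Swapping in a different dataset's `class_names` reuses the same network weights unchanged, which is how the shared semantic embedding space lets one model serve multiple label spaces.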
>**W3: Comparisons with published works:**
- We appreciate the comments, and agree that X-Decoder is a great work. However, there are multiple differences between our work and X-Decoder, which makes it hard to have a strict comparison. We detail the differences below:
1. **Training paradigm:** As we pointed out in W1, we directly cotrain on multiple datasets using a shared set of parameters (single model), while the ADE and COCO results in X-Decoder Table 1 are **task-specific transfer**. That is, X-Decoder first pretrains on large-scale data and then fine-tunes on each target dataset using different sets of fine-tuned parameters.
2. **Training data:** X-Decoder pretrains on COCO panoptic and referring segmentation and 4M image-text pairs with a long schedule, and we quote from their paper: *"all the pre-trained models are trained with 50 epochs of COCO data and roughly 45 epochs of 10 million image-text pairs"*. Afterwards, they fine-tune the model on COCO and ADE20k. By contrast, we only train on COCO panoptic, ADE20k semantic, and weakly-supervised Objects365 v2 detection, with a total of 1.8M training images, far fewer than X-Decoder uses.
3. **Model architecture:** X-Decoder adopts Mask2Former. Our model architecture is different as explained in supplementary Sec. E.
- Finally, we note that X-Decoder is a concurrent work that was published at CVPR 2023 (June 2023), while the NeurIPS submission deadline was in May 2023.
- Therefore, we think that comparing with the "Separate" baselines under the same setting is a reasonable way to show that our proposed multi-dataset multi-task training scheme improves segmentation performance.
>**Q1: Open-sourcing:**
- We thank the reviewer for the question. Indeed, we will release the code and model weights to the public, upon acceptance of the submission.
>**Q2: Trade-offs between the quality and quantity of the training samples:**
- Thanks for the great suggestion. In Table 2 of the submission, we show the results with and without cotraining with Objects365. The performance is generally better when cotraining with O365.
- In addition, we experimented with cotraining on different portions of the O365 dataset: we cotrain DaTaSeg on COCO panoptic, ADE semantic, and 10%/25%/50%/100% of O365 training data, on the ResNet50 backbone. We show the results in the table below:
| Ratio of O365 training data | ADE semantic | COCO panoptic | ADE panoptic | O365 instance |
|--------------------:|:------------:|:-------------:|:------------:|:-------------:|
| 10% | 48.5 | 48.7 | 30.0 | 10.4 |
| 25% | 48.6 | 48.3 | 30.6 | 12.0 |
| 50% | 47.7 | 48.5 | 29.8 | 13.1 |
| 100% | 47.2 | 48.7 | 29.4 | 14.5 |
- The results show that when increasing the number of O365 weakly-supervised samples, O365 performance increases, COCO panoptic performance is not affected, and ADE20k performance slightly decreases. Overall, the gains are larger than the losses. We will include this interesting analysis in the revision.
---
Rebuttal Comment 1.1:
Comment: I would like to extend my appreciation to the authors for their efforts in addressing my questions. While some of my concerns have indeed been adequately resolved, I must express that I still have reservations regarding the technical contributions and the comparisons made with certain published works.
1. The authors have argued that comparing the X-Decoder (XD) should be avoided due to its recent publication at CVPR 2023. I acknowledge this perspective, however, it's worth noting that ***X-Decoder was made available on ArXiv about 9 months*** ago and was accepted at CVPR around 5 months ago. Considering the dynamic nature of this field, I believe it is not unreasonable to consider a comparison.
2. In relation to the task variation, the authors have contended that XD's advantage largely stems from being trained on more data. It's important to note that the additional data largely comes from ***image-text paired data***, which ***is potentially easier to label than detection datasets***. I regard the ability to encompass a broader set of tasks as a strength rather than a weakness, particularly considering that the performance gains and the capability of supporting more tasks are attained through training on relatively *cheap* image-text pairs. \
Additionally, I would like to bring to the authors' attention the existence of OpenSeeD (accepted at ICCV2023) [2], which also harnesses detection data to enhance segmentation tasks. *Please note that I do not expect a comparison to [2] to be drawn. It's merely offered for your reference.*
3. The authors point out that the primary contributions stem from the proposal to "include a simple weakly-supervised instance segmentation training using bounding-box supervision to increase the training data". Nonetheless, it's worth noting that weakly supervised instance segmentation is already a well-established task, and ***the Projection Loss employed in the paper to facilitate this approach is directly borrowed from [1]***. Given these factors, I find it challenging to regard this as a strong technical contribution. Furthermore, it is debatable which of the two is actually less resource-intensive: detection data or image-text data.
Thank you once again for your time on addressing my questions!
[1] Tian, Z., Shen, C., Wang, X. and Chen, H., 2021. Boxinst: High-performance instance segmentation with box annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5443-5452).
[2] Zhang, H., Li, F., Zou, X., Liu, S., Li, C., Gao, J., Yang, J. and Zhang, L., 2023. A simple framework for open-vocabulary segmentation and detection. arXiv preprint arXiv:2303.08131.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer t7xz,
We thank the reviewer for their review efforts and additional comments. Before we address the concerns, we would like to emphasize that as mentioned in the rebuttal, X-Decoder is a pioneering work and we are very willing to cite and compare it in our revision.
Now, we carefully address the concerns below.
>***Comparison with X-Decoder***:
We simply added a note that X-Decoder was published at CVPR 2023; we did not argue that comparing with X-Decoder should be avoided. We promised in our rebuttal W1: *"we'll cite X-Decoder and include the comparison in the revision"*. Given the several key differences (rebuttal W1 and W3) between our work and X-Decoder, it is still hard to make a strict apples-to-apples comparison, though. Below, we highlight the differences again:
1. **Task and Focus:** The pre-training in X-Decoder includes panoptic segmentation, referring segmentation, and image-text pairs (image-text retrieval and image captioning), which has a focus on vision-language tasks. These tasks are very different from ours: we cotrain on three mainstream segmentation tasks (panoptic/semantic/instance) and exclusively focus on segmentation.
2. **Training paradigm:** We directly cotrain on multiple datasets using a shared set of parameters (single model), while the ADE and COCO results in X-Decoder Table 1 are **task-specific transfer**. That is, X-Decoder first pretrains on large-scale data and then fine-tunes on each target dataset using different sets of fine-tuned parameters.
3. **Training data:** X-Decoder pretrains on COCO panoptic, referring segmentation and image-text pairs with a long schedule, and we quote from their paper: *"all the pre-trained models are trained with 50 epochs of COCO data and roughly 45 epochs of 10 million image-text pairs"*. Afterwards, they fine-tune the model on COCO and ADE20K separately. By contrast, we co-train on COCO panoptic, ADE20K semantic, and weakly-supervised Objects365 v2 detection with a total of 1.8M of training images. We do not train on large amounts of text data.
4. **Model architecture:** X-Decoder adopts Mask2Former. Our model architecture is different as explained in supplementary Sec. E. More importantly, we don't have an ***online text encoder*** in our model architecture.
Consequently, a strict apples-to-apples comparison would require careful alignment of every setting, which is beyond the scope of this work and rebuttal.
>***Image-text vs. detection data:***
We agree with the reviewer that *the ability to encompass a broader set of tasks is a strength rather than a weakness*, and we never claimed that it is a weakness. This is also one of our motivations to include various types of segmentation annotations from multiple datasets for training a single model.
We emphasize that both image-text paired data and detection data are valuable training data. However, it is beyond the scope of this work and rebuttal to compare which one is better.
Finally, we thank the reviewer for bringing OpenSeeD to our attention. We are also happy to cite OpenSeeD in our final revision.
>***Projection loss:***
We thank the reviewer for the comment. We emphasize that we never claimed that using the projection loss is our *primary contribution*. We have carefully phrased our contributions in the draft and rebuttal. Our primary novelty is that we **propose a single unified framework for multi-dataset multi-task segmentation**. We list other contributions below:
1. With our multi-dataset multi-task training, we 1) leverage knowledge from multiple datasets, containing different types of annotations, to improve segmentation performance on all datasets, especially smaller-scale datasets, such as ADE20K; 2) enable weakly-supervised instance and panoptic segmentation through knowledge transfer; 3) directly enable open-vocabulary segmentation.
2. We propose to use a universal segmentation representation (Sec. 3.1) with a fully-shared network architecture (Sec. 3.4), and apply different losses to different types of segmentation tasks. With the projection loss, we perform bounding-box-supervised instance segmentation *without any modification to the architecture*, which also performs well in the co-training empirically. | Summary: This paper proposes DaTaSeg, a general multi-dataset multi-task segmentation model. It utilizes shared representations and different pooling operations to perform panoramic, semantic and instance segmentation tasks. DaTaSeg benefits from weak supervision and knowledge transfer across datasets. It outperforms separate training on all datasets (especially smaller ones) and enables weakly supervised segmentation. The model also transfers well to unseen datasets and supports open vocabulary segmentation.
Strengths: The paper addresses the challenge of training a single model on multiple segmentation tasks and datasets by proposing DaTaSeg, which has the potential to save computational resources and streamline the development of segmentation models. It can also benefit from weak supervision by incorporating cheaper bounding box annotations.
The proposed DaTaSeg shows promising results in transferring to unseen datasets and enabling open-vocabulary segmentation.
The authors annotate a subset of the Objects365 dataset and plan to release it as a public benchmark for instance segmentation. This contributes to the research community by providing a standardized evaluation dataset.
Weaknesses: The Segment Anything model (SAM) released in April 2023 should be included for comparison as it's also a universal segmentation model which has zero/few-shot capability.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: May need proofreading:
Line 35, "which are" -> "which is"
Line 38, "which map" -> "which maps"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: According to the F section in Supplementary Material, the computational cost is quite high which may affect the reproducibility. Consider releasing the pre-trained models in various sizes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our contributions and the helpful feedback. We carefully address the comments below.
>**Weaknesses: Comparison with SAM:**
We thank the reviewer for the suggestion. We are happy to include a comparison with the Segment Anything paper in the revision. We note that SAM is a great paper, but there are several major differences between DaTaSeg and SAM.
1. The main use-case for SAM is prompt-based segmentation, i.e., the user inputs a point or box, and the model segments the object referred to by the prompt. By contrast, we aim to segment all objects and stuff in a bottom-up way. The only instance segmentation quantitative results reported in the SAM paper (Table 5) require taking boxes from an external trained detector, ViTDet.
2. Besides, SAM built its own large-scale dataset with more than 1 billion masks (SA-1B) by designing a data engine consisting of a model-assisted manual annotation stage and a semi-automatic stage, which is very costly; in contrast, we only utilize multiple publicly available datasets and train a joint segmentation model to improve performance. Hence, our DaTaSeg and SAM are not comparable in terms of training data.
3. Except for the explicitly trained text-to-mask task, SAM's outputs are class-agnostic binary masks (the SA-1B dataset is also class-agnostic), while our model outputs class predictions together with mask predictions.
>**Questions: Proofreading:**
Thanks! We will fix these typos and do a thorough proofreading.
>**Limitations: Computational cost and open-sourcing:**
- We thank the reviewer for the suggestion. Indeed, we plan to open-source the code and release the models in various sizes, upon acceptance of the paper.
- We use a longer training schedule because we are cotraining on the much larger Objects365 dataset (1,662,292 training images), which is 14 times larger than COCO. In order to balance the samples from different datasets, we train longer on COCO and ADE.
- We have some explanation about the computational efficiency in supplementary Sec. G, including several techniques to improve it. We leave it for future work to further improve the efficiency. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces DaTaSeg, a universal multi-dataset multi-task segmentation model. It uses a shared representation for panoptic, semantic, and instance segmentation tasks, with different techniques to address task differences. Weak supervision and knowledge sharing are employed. DaTaSeg improves performance on various datasets, especially small-scale ones. It enables knowledge transfer and open-vocabulary segmentation. An Objects365 instance segmentation set of 1,000 images will be released as a benchmark.
Strengths: 1. It is noticed that training segmentation models on multiple datasets to obtain better results is difficult. It is even more challenging to train segmentation models on multiple datasets for multiple tasks (instance segmentation, semantic segmentation, weakly supervised instance segmentation). The performance of DaTaSeg on COCO and ADE shows the effectiveness of the proposed approach.
2. The paper leverages box-level supervision to improve the segmentation performance.
Weaknesses: 1. [minor] The definition of mask proposal is commonly used in instance segmentation [1]. However, the description uses too many math terms to describe a simple concept that can be demonstrated by natural language, figures, and more compact math. This is not good for readers to understand.
2. [major] The results of weakly supervised instance segmentation do not look promising. Previous state-of-the-art weakly supervised instance segmentation methods achieve 70-90% of the performance (mAP) of their fully supervised counterparts [2,3]. However, in Table 2, the Objects365 instance segmentation results do not look promising, at only around 10%. What is the upper-bound performance of O365 instance segmentation?
3. [major] The approach introduces the unified mask representation without comparison or discussion with other mask representations. It is well-known that multi-dataset or multi-task training is not trivial, and it would be better to show more training details and empirical design considerations in the paper.
[1] run-length encoding: https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/coco.py#L265
[2] Lan, Shiyi, et al. "Vision transformers are good mask auto-labelers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] Li, Wentong, et al. "Box2Mask: Box-supervised Instance Segmentation via Level-set Evolution." arXiv preprint arXiv:2212.01579 (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our contributions and the helpful feedback. We carefully address the comments below.
>**W1: Mask proposal:**
- We agree with the reviewer that the mask proposal is commonly used in instance segmentation. However, mask proposals are used differently in different segmentation tasks (e.g., one mask proposal corresponds to one instance mask in instance segmentation, but it may correspond to one amorphous stuff region in semantic or panoptic segmentation). Therefore, Sec. 3.2 is mainly about how we apply *different* merge operations for *different* segmentation tasks. We briefly introduce the mask proposal representation in L121 in a math format in order to use it in Eq. 1 and 2, which define our proposed MERGE operation. We thank the reviewer for the suggestion and will improve the presentation.
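For intuition, here is a toy sketch of a semantic-segmentation-style merge, where proposals voting for the same class are combined into one per-class score map (this only illustrates the idea; the shapes and weighting are assumptions, not the paper's exact Eq. 1 and 2):

```python
import numpy as np

def merge_semantic(mask_probs, class_probs):
    """Illustrative merge for semantic segmentation: proposals that
    predict the same class are merged into one per-class score map,
    and each pixel then takes the highest-scoring class. Shapes:
    mask_probs (N, H, W) proposal masks, class_probs (N, C) class
    predictions per proposal."""
    # Per-class score map: sum of proposals weighted by class confidence.
    seg = np.einsum('nhw,nc->chw', mask_probs, class_probs)
    return seg.argmax(axis=0)                # (H, W) semantic label map
```

An instance-segmentation merge would instead keep proposals separate (one mask per instance), which is why task-specific merge operations are needed on top of a single shared proposal representation.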
>**W2: Weakly supervised instance segmentation:**
We thank the reviewer for the question, and address it below.
- First, the reason why the scores on O365 are generally low in absolute terms is that O365 has 365 categories with an imbalanced distribution [A]; hence, it is more challenging than detection datasets covering only common categories, e.g., COCO.
- Second, we also evaluate DaTaSeg's class-agnostic mask average recall (AR@100) on O365. DaTaSeg achieves **32.8** and **34.5** AR@100 using ResNet50 and ViTDet-B backbones, respectively, which is not low.
- Third, since there is no O365 instance segmentation training set available (it is one of our contributions to annotate 1000 O365 images with instance segmentation annotation, used as a validation set for weakly supervised instance segmentation), we do not have a fully-supervised upper bound. That being said, we have also tried our best to find an **approximate** upper-bound performance: in Detic [A], their fully-supervised detector on Objects365 is 31.2 box AP with a Swin-B backbone, which additionally uses several techniques (e.g., Federated loss [B] and repeated factor sampling [C]) to address the category imbalance issue. Since mask AP is generally several points lower than box AP (e.g., in Detectron2 Mask R-CNN benchmark, COCO box AP can be 5-6 points higher than mask AP), our performance on the comparable ViTDet-B backbone of 16.1 mask AP is reasonable.
- Finally, we thank the reviewer for the pointers. We will include [2,3] in the related work section. Ours and [2,3] focus on different aspects. [2] first generates high-quality pseudo masks using groundtruth bounding boxes, and then trains an instance segmentation model. Its pipeline is more complicated than ours. [3] proposes several complex modules specifically for box-supervised instance segmentation, while our framework is simpler and tackles multiple segmentation tasks simultaneously.
>**W3: Comparison with other mask representations:**
- For the mask representation, we explain our motivation in L110-113.
- Another commonly used mask representation is first getting the bounding box prediction, and then predicting the mask within the predicted bounding-box region, as in Mask R-CNN. However, this representation is not suitable for stuff categories, e.g., it is unnatural to get a single bounding box for multiple grass fields. As a result, some methods such as [D] additionally add a semantic segmentation branch for stuff categories. By contrast, in the mask representation stage, we did not treat thing and stuff categories differently. We explain the reason in L116 — different datasets have different definitions for things and stuff, e.g., the 'table' category; so treating thing and stuff mask representation separately is unsuitable. Instead, we apply different merge operations and postprocessing in the later stage.
- We agree with the reviewer that *”it is better to show more training details”*. We have already provided more empirical design considerations in the supplementary Sec. D, E, and more implementation details in supplementary Sec. F.
[A] Zhou, Xingyi, et al. "Detecting twenty-thousand classes using image-level supervision." ECCV 2022.
[B] Zhou, Xingyi, and Philipp Krähenbühl. "Joint COCO and LVIS workshop at ECCV 2020: LVIS challenge track technical report: CenterNet2." 2020.
[C] Gupta, Agrim, Piotr Dollar, and Ross Girshick. "Lvis: A dataset for large vocabulary instance segmentation." CVPR 2019.
[D] Kirillov, Alexander, et al. "Panoptic feature pyramid networks." CVPR 2019. | null | null | null | null | null | null |
Towards Efficient Pre-Trained Language Model via Feature Correlation Distillation | Accept (poster) | Summary: This paper introduces a new technique for compressing PLMs. Within the context of knowledge distillation, the authors introduce a new type of relation distillation, which focuses on modeling relations between token features at both the token-level and sentence-level. These relations are then utilized as an objective function for the teacher-student distillation process. Additionally, the authors propose a correlation-based distillation loss as a replacement for the previous distillation loss function that relied on exact match as the target. Experimental results show that FCD achieves significant improvements over the previous methods.
Strengths: 1. A clear motivation for the method: resolving the mismatch in size during alignment between the teacher and the student, such as the number of heads or the dimension of the hidden state. The authors propose modeling both token-level and sample-level relationships, which remains consistent between the teacher and the student models, regardless of their model design. This approach can help to minimize the impact of alignment mismatch.
2. The method offers a simple and effective solution for distillation, supported by proof that demonstrates the relationship between PLC, KLD, and MSE.
3. The presentation of method is clear and easily comprehensible.
4. It is remarkable to observe that FCD, a task-specific distillation approach, surpasses the performance of task-agnostic KD with pre-training, like TinyBERT and MiniLMv2.
Weaknesses: 1. The idea of the sample-level relation loss appears unreasonable to me. Comparing the j-th token in each sentence and using dot-product similarity to represent the sentence-level relationship seems questionable. It is doubtful whether the relationships between tokens at the same position, but in different sentences, truly reflect the relationships between the sentences and are directly comparable. For instance, consider the following example:
S1: For simplicity and clarity, our paper focuses...
S2: For your safety, please ensure that you are wearing protective...
If we focus on the first token, they are the same, yet the sentences have completely different meanings.
2. I have some doubts regarding the experimental results and would appreciate further clarification from the authors to determine if any potential weaknesses exist. Please refer to questions 1, 2, and 3.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Upon reviewing the results in Table 1, I noticed that none of the reported results on the test set on GLUE benchmark, including BERT_base, are consistent with those presented in their original paper. Some results are higher than the previous paper, while others are lower. This raises the question of whether all the baseline results were reproduced or if there were modifications to the student structure or different settings employed. Could you clarify why there are changes to the baselines results?
2. The baseline "BERT_SMALL6" involves dropping some layers and fine-tuning on the downstream datasets without knowledge distillation (KD), which can be interpreted as the L_g loss in Eq.10. However, I noticed a significant discrepancy between the reported SST-2 result of 90.7% and the result shown in Figure 3.c's ablation experiments, where using only L_g on SST-2 yields approximately 86%. Could you explain the reason behind this substantial gap between the two results?
3. The motivation for FCD is to address the size misalignment problem; however, in the conducted experiments, both the teacher and student models maintain the same structure, except for dropping certain layers. Consequently, there doesn't appear to be a misalignment issue in dimension or the number of attention heads. Could you provide some experiments on a student model with fewer attention heads and a reduced dimension?
4. Can you provide some explanation of why the sample-level relation loss can aid in distillation?
5. Have you attempted to build FCD upon released distilled models with pre-training, such as TinyBERT (e.g., TinyBERT_General_
*L_*D)? It would be interesting to see these two techniques combined to boost the performance of the distilled model.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments and efforts in reviewing our paper. We address your comments and questions in the following content.
> Weakness 1: The concept of sample-level relation loss seems unreasonable. Comparing the j-th token in each sentence to determine sentence-level relationships is questionable. For example, sentences "S1" and "S2" both start with the same token but convey different meanings.
Response to W1: Indeed, the example you provided highlights that individual token similarity may not always reflect the holistic meaning of sentences. However, our proposed sample-level relation loss, as described in Eq.9, is not solely based on one isolated token comparison. Instead, it focuses on an aggregate assessment across all tokens in a sequence. To put it simply, our approach isn't just about direct token-to-token comparison but rather the interaction and aggregation of these tokens across an entire sentence.
Consider the following customer review examples:
S1: "The camera quality is outstanding..."
S2: "The battery life is inadequate..."
S3: "The display clarity is excellent..."
If we assess token positions across these sentences, we observe patterns associated with product attributes (e.g., tokens in the third position) and sentiment (e.g., tokens in the fifth position). This suggests that there's a relationship not only in the individual tokens but in the context they create. Thus, we define sentence-level relationships with token-wise relation matrices across all positions and model them in a nuanced and meaningful manner.
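The aggregation described above can be illustrated with a minimal NumPy sketch. This is a hypothetical reconstruction of the sample-level relation maps (one B x B map per token position), not the paper's exact Eq.9, which may include additional normalization:

```python
import numpy as np

def sample_level_relations(features):
    """features: (B, N, D) batch of token embeddings.

    For each token position j, compare that position across all B samples
    via dot products, giving one B x B relation map per position
    (N x B x B in total) rather than a single isolated token comparison.
    """
    B, N, D = features.shape
    per_position = features.transpose(1, 0, 2)             # (N, B, D)
    return per_position @ per_position.transpose(0, 2, 1)  # (N, B, B)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 6, 8))   # B=4 samples, N=6 tokens, D=8 dims
R = sample_level_relations(feats)
print(R.shape)  # (6, 4, 4)
```

A distillation loss would then compare the teacher's and student's relation maps across all N positions, which is why no single position dominates the sentence-level relationship.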
> Question 1: In Table 1, the results for the GLUE benchmark, including BERT_base, differ from the original paper. Were the baseline results reproduced as is, or were there changes to the student structure or settings? Please explain the discrepancies.
Response to Q1: Thank you for noting the discrepancies in Table 1. We used public pre-trained models for BERT_base, DistilBERT, and TinyBERT. For MiniLM v1 and v2, we re-implemented them due to differences in their publicly available structures and the absence of test set results. All models were fine-tuned using consistent settings. The variations from the original papers might be due to training randomness and specific training details, such as the omission of the data augmentation used in TinyBERT.
> Question 2: The baseline "BERT_SMALL6" involves dropping some layers and fine-tuning on the downstream datasets without KD, which can be interpreted as the L_g loss in Eq.10. However, I noticed a significant discrepancy between the reported SST-2 result of 90.7% and the result shown in Figure 3.c's ablation experiments, where using only L_g on SST-2 yields approximately 86%. Could you explain the reason behind this substantial gap between the two results?
Response to Q2: We appreciate your careful attention to this discrepancy in our paper. The difference arises from the configurations in Figure 3.c, where we applied both L_g and L_s, in contrast to BERT_SMALL6, which only utilized L_g. Upon revisiting our setup, we identified that the beta weight factor was mistakenly set too high, causing an imbalance between the losses L_s and L_g. This misconfiguration resulted in the observed performance gap. After correcting this and re-running our experiments, we achieved an average SST-2 score of 91.9% over four runs. We will amend Figure 3.c in the revised version to reflect this.
> Question 3: Could you provide some experiments on a student model with fewer attention heads and a reduced dimension?
Response to Q3: Thank you for your valuable suggestion. We conducted experiments on student models with fewer attention heads and reduced dimensions. Specifically, we set the student models to be 6-layer, 6-head, with a hidden size of 384. Here are the results:
| Model | MNLI-m | SST-2 |
| ------ | ------ | ------ |
| (Teacher) BERT_Base | 84.5 | 93.4 |
| TinyBert_6H_6L_384 | 79.8 | 88.2 |
| MiniLMv2_6H_6L_384 | 81.2 | 89.6 |
| FCD_6H_6L_384 | 81.8 | 90.3 |
FCD consistently outperforms the baselines, highlighting its ability to alleviate the misalignment problem, which aligns with the motivation of our approach. While the performance of these compact models doesn't reach our original paper's results, potentially due to the lack of pre-trained initialization, integrating strategies such as attention head pruning could further improve results. We will incorporate these results in the updated version.
> Question 4: Can you provide with some explanation why the sample-level relation loss can aid in distillation?
Response to Q4: Please see our response to W1. Let us know if it does not answer your question.
> Question 5: Have you attempted to build FCD upon released distilled models with pre-training, such as TinyBERT (e.g., TinyBERT_General_ *L_*D)? It would be interesting to see these two techniques combined to boost the performance of the distilled model.
Response to Q5: Thank you for posing this insightful question. We explored the idea by building FCD upon pre-trained TinyBERT models and subsequently fine-tuned them on downstream tasks without data augmentation. Here's a summary of the results:
| Model | RTE | STS-B |
| ----------- | ----------- | ----------- |
| (Teacher) BertBase | 66.8 | 85.2 |
| TinyBert_6L_768 | 70.0 | 83.9 |
| TinyBert_4L_312 | 64.1 | 80.4 |
| (FCD) TinyBert_6L_768 | 71.9 | 85.4 |
| (FCD) TinyBert_4L_312 | 67.3 | 81.1 |
Our experiment shows that FCD combined with general distillation not only outperforms the regular TinyBERT models on the RTE and STS-B tasks but also the results reported in our original paper, which suggests that initializing the student model with general distillation provides a robust foundation for further fine-tuning with our FCD method. This observation aligns well with the findings presented in other recent work [1].
References:
[1] Lu, Chengqiang, et al. "Knowledge Distillation of Transformer-based Language Models Revisited." arXiv preprint arXiv:2206.14366 (2022).
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response.
I still feel that comparing the same positions between different samples as a way to establish sample-level relations is not quite reasonable. The examples you provided share the same structure, allowing for the comparability of words at the same positions. However, differences in sentence structure, syntax, and semantics can lead to a lack of direct comparability between words at the same positions across sentences. As such, the method of comparing words at the same positions across different samples as a means of establishing sample-level relations could be constrained.
Regarding the aggregation of sentence-level information, a common approach in KD involves utilizing the [CLS] representation as the aggregator. This offers computational efficiencies with a complexity of only B^2D, which is lower than B^2ND. Do you have results comparing this one with your methods?
---
Reply to Comment 1.1.1:
Comment: > Q1: I still feel that comparing the same positions between different samples as a way to establish sample-level relations is not quite reasonable. The examples you provided share the same structure, allowing for the comparability of words at the same positions. However, differences in sentence structure, syntax, and semantics can lead to a lack of direct comparability between words at the same positions across sentences. As such, the method of comparing words at the same positions across different samples as a means of establishing sample-level relations could be constrained.
Response to Q1: We agree with the potential concerns of comparing tokens at the same positions across varied samples, especially when considering the inherent differences in sentence structure, syntax, and semantics.
To provide a clearer perspective: while our examples might give the impression of a direct token-to-token comparison, in fact, the term "token" in our paper refers to high-dimensional representations of words situated within a D-dimensional embedding space. It's vital to underline that there isn't a straightforward one-to-one correspondence between input words and their feature positions.
These high-dimensional representations capture more than just the standalone meaning of a word. They also integrate its surrounding context and interactions with other words in the sentence. Within this expanded feature space, these embeddings can discern subtle relationships across different samples, transcending mere surface-level token comparisons. Our empirical results showcased in Section 4.4 further validate the efficacy of these high-dimensional, sample-level feature relationships in enhancing the student model's distillation process.
Moreover, we believe that exploring techniques like dimensionality reduction may offer a clearer depiction of the intricacies inherent in these high-dimensional relation maps. We truly appreciate your insightful feedback, which will certainly guide our efforts in refining and enriching our paper.
> Q2: Regarding the aggregation of sentence-level information, a common approach in KD involves utilizing the [CLS] representation as the aggregator. This offers computational efficiencies with a complexity of only B^2D, which is lower than B^2ND. Do you have results comparing this one with your methods?
Response to Q2: Thank you for drawing attention to the prevalent use of the [CLS] representation in Knowledge Distillation (KD) as an aggregator for sentence-level information. You're right; it's a widely favored method in the community, given its computational efficiencies. In our research, we ventured into this approach. To leverage the [CLS] representation, we extracted the first position from the sample-level relation matrix of size N x B x B, referencing R_s[0, :, :] as per Eq.4. In this setting, we exclusively applied the sample-level relation loss L_s to evaluate the impact of the [CLS] representation. The results are presented in the table below:
| Model | MRPC | SST-2 |
| ----------- | ----------- | ----------- |
| (Teacher) BertBase | 88.3 | 93.4 |
| [CLS] token only | 84.3 | 91.4 |
| Ours | 84.7 | 91.9 |
While the [CLS]-only method offers certain computational advantages, our method underscores the value of aggregating interactions across all tokens within a sentence; as the results indicate, this improves distillation performance. It is worth noting that although the [CLS]-only variant reduces the complexity of the sample-level relation maps, the overall resource savings are modest: the token-level relation maps dominate the computation, so the saving is small and comes at the expense of some performance. | Summary: This paper proposes Feature Correlation Distillation (FCD), an approach for distilling pre-trained transformers. FCD involves a two-part distillation loss: (1) token-level and (2) sample-level, which helps to eliminate dependence on matching dimensionality/architectural details between the student and teacher and improves efficiency by constant factors. The paper also proposes replacing the KL-divergence or MSE based objective with a Pearson correlation owing to its invariance to positive linear transformations. Experiments were run on GLUE with BERT, RoBERTa and distilgpt2, including comparing KL, MSE and correlation loss objectives and ablating the token and sample level losses.
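The invariance to positive linear transformations mentioned above can be demonstrated with a short sketch. The helper below is our own illustrative formulation of a Pearson-correlation distillation objective, not the paper's exact loss:

```python
import numpy as np

def pearson_loss(student_rel, teacher_rel, eps=1e-8):
    """1 - Pearson correlation between flattened relation maps.

    Unlike MSE/KL, this is invariant to positive linear transforms
    a*x + b of either input, relaxing the exact-match constraint.
    """
    s = student_rel.ravel()
    t = teacher_rel.ravel()
    s = (s - s.mean()) / (s.std() + eps)
    t = (t - t.mean()) / (t.std() + eps)
    return 1.0 - float(np.mean(s * t))

t = np.random.default_rng(1).normal(size=(8, 8))
# A positively scaled and shifted copy of the teacher map incurs ~zero loss:
print(round(pearson_loss(3.0 * t + 5.0, t), 6))  # 0.0
```

Under this objective the student only needs to match the teacher's relation maps up to scale and shift, which is one way to read the claimed relaxation of KL/MSE matching.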
Strengths: The paper proposes an interesting approach to distillation that relies on model internals while allowing flexibility in architectural choices. It proposes a more efficient distillation procedure which can be useful in resource constrained settings. The paper also makes an effort to empirically justify the chosen loss types and objectives via ablations.
Weaknesses: GLUE tasks can have a large amount of variance due to randomness and due to small test sets, single-run scores may not be reliable (see e.g. https://arxiv.org/pdf/1904.09286.pdf: Section 4.2 note on variance, https://arxiv.org/abs/2010.06595 on small test sets). So, at the very least, averaging across random seeds and including errors bars in Figure 3 is important to understand if the results are robust. Since these experiments provide the only evidence for the claims and since the claims are largely empirical, adding robustness to the experiments seems crucial.
Nits:
The citation formatting appears to be off: try using \citep and \citet with natbib.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Would it be possible to expand the set of experiments to more tasks? Providing more varied empirical evidence for the approach could make the paper stronger.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We address your comments and questions in the following content.
> Weakness 1: GLUE tasks can have a large amount of variance due to randomness and due to small test sets, single-run scores may not be reliable (see e.g. https://arxiv.org/pdf/1904.09286.pdf: Section 4.2 note on variance, https://arxiv.org/abs/2010.06595 on small test sets). So, at the very least, averaging across random seeds and including errors bars in Figure 3 is important to understand if the results are robust. Since these experiments provide the only evidence for the claims and since the claims are largely empirical, adding robustness to the experiments seems crucial. Nits: The citation formatting appears to be off: try using \citep and \citet with natbib.
Response to W1: Thank you for your insightful feedback. We will address your concerns in two parts:
Robustness in Experiments: We absolutely agree with with the points raised about the inherent variance in GLUE tasks and the potential implications for the reliability of single-run scores. All of our current results are already averaged across four different runs (as indicated in Table 1), and we acknowledge that this might not be sufficiently clear in Figure 3. In addition, in our revised paper, we will augment Figure 3 with error bars that capture the variability across these runs, providing a more comprehensive view of the robustness of our results.
Citation Formatting: We're grateful for the feedback on our citation formatting. We'll make the necessary adjustments and ensure the use of \citep and \citet with natbib for consistent and standard citation representation in the updated version of the paper.
> Question 1: Would it be possible to expand the set of experiments to more tasks? Providing more varied empirical evidence for the approach could make the paper stronger.
Response to Q1: Thank you for your valuable suggestion. Broadening the range of evaluated tasks is indeed beneficial for solidifying our empirical evidence. In addition to the tasks mentioned in the main paper, we have extended our experiments to SQuAD 1.1 and 2.0, which are available in our supplementary materials. Moreover, based on suggestions from reviewer XAJW, we've tested our approach on the WikiText-103 benchmark for a generation task. Our model achieved an improved perplexity of 19.8 against distilgpt2's 21.1. This finding underscores the versatility of our methodology beyond classification tasks. We're open to expanding our empirical evaluation further, based on the tasks recommended by the reviewers, to cement our approach's validity across diverse NLP applications. Your guidance in this regard is invaluable.
| Model | SQuAD 1.1 | SQuAD 2.0 |
| ------ | ------- | ------ |
| (Teacher) BERT_Base | 88.7 | 78.8 |
| DistilBERT | 86.2 | 69.5 |
| TinyBERT | 87.5 | 77.7 |
| Ours | 88.2 | 78.4 |

| Model | WikiText-103 |
| ------ | ------- |
| (Teacher) GPT2 | 16.3 |
| distilgpt2 | 21.1 |
| Ours | 19.8 |
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for providing details! Reliability of experiments was my biggest gripe in the initial review which has been addressed in the response. I've raised my score to reflect this!
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer kWDa:
Thanks for your feedback!
Best,
Paper3038 Authors | Summary: This paper proposes a method for compressing pre-trained language models (PLMs) based on transformer architectures using feature correlation distillation (FCD). FCD models both token-level and sample-level relations between the teacher and student models, and uses a correlation-based loss function to relax the exact-match constraint of traditional knowledge distillation methods. FCD achieves strong performance on the GLUE benchmark with BERT and RoBERTa teachers.
Strengths: A novel perspective to introduce sample-level information to help distillation. The design of learning objective is also interesting.
Weaknesses: The token-level method is very similar to the attention-based method of MiniLM. What is the essential difference? Also, this paper selects the last hidden state before the prediction head as the feature, but is this really appropriate? Because the last hidden state is too close to the output, a lot of information will be lost. Previous work (e.g., kNN-LM) mentioned that using the feature before the final layer’s FFN might be better.
Although the sample-level idea sounds interesting, it doesn’t seem to make much sense. I’m not very clear why the sample-level information in a batch would be useful. If this is really useful, then not only in the distillation setting, but also in general training, sample-level information should be used (such as batch-norm methods). Otherwise, it is not very convincing to say that sample-level information is only useful in distillation.
The baseline results reported in this paper are very inconsistent with the numbers reported in previous work. For example, when comparing with MiniLM v2, the numbers in the MiniLM v2 paper and those reported in this paper are quite different, which makes me concerned about whether the comparison is fair and whether the conclusions drawn from it are convincing.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Since this work has done GPT-2 distillation, why not also verify the effect on some generation tasks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See the weakness and question section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We address your comments and questions in the following content.
> Weakness 1: The token-level method is very similar to the attention-based method of Minilm. What is the essential difference? Also, this paper selects the last hidden before the prediction head as the feature, but is this really appropriate? Because the last hidden is too close to the output, a lot of information will be lost. Previous work (e.g., kNN-LM) mentioned that using the feature before the final layer’s FFN might be better.
Response to W1: Thank you for your valuable comments. We will address your concerns in two parts:
Distinction from MiniLM: MiniLM’s attention is built upon the scaled dot-product interaction among queries, keys, and values, resulting in an attention matrix of size H x N x N (where H represents attention heads and N is the sequence length). This design inherently limits flexibility, especially when adjusting head counts. On the other hand, our approach forms a token-level relation matrix of size N x N via block features, offering more flexibility across models with varied attention heads.
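To make the head-invariance point concrete, here is a minimal NumPy sketch. The shapes and function names are our own illustration (the actual FCD implementation may additionally normalize the maps):

```python
import numpy as np

def token_level_relations(block_features):
    """block_features: (N, D) features of one sample from a transformer block.

    The resulting N x N relation map depends only on sequence length N,
    not on hidden size D or the number of attention heads, so teacher
    (e.g. D=768) and student (e.g. D=384) maps are directly comparable.
    """
    F = block_features
    return F @ F.T  # (N, N)

rng = np.random.default_rng(0)
teacher_map = token_level_relations(rng.normal(size=(16, 768)))  # teacher block
student_map = token_level_relations(rng.normal(size=(16, 384)))  # smaller student
print(teacher_map.shape == student_map.shape)  # True: both (16, 16)
```

By contrast, matching attention matrices of shape H x N x N ties the loss to the head count H, which is the flexibility limitation described above.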
Feature Selection: We have added an experiment evaluating both pre- and post-FFN features; the results are summarized in the following table.
| Model | MNLI-m | SST-2 |
| ----------- | ----------- | ----------- |
| (Teacher) BERT_Base | 84.5 | 93.4 |
| pre-FFN | 83.6 | 92.5 |
| post-FFN | 83.8 | 92.8 |
Our results indicate a slightly superior performance using post-FFN features. It's worth noting the divergence with kNN-LM; however, we're addressing different tasks, i.e., language modeling in kNN-LM versus our focus on distillation. A parallel observation was made with Vision Transformer (ViT) studies [1].
References:
[1] Yang, Zhendong, et al. "Vitkd: Practical guidelines for vit feature knowledge distillation." arXiv preprint arXiv:2209.02432 (2022).
> Weakness 2: Although the sample-level idea sounds interesting, it doesn’t seem to make much sense. I’m not very clear why the sample-level information in a batch would be useful. If this is really useful, then not only in the distillation setting, but also in general training, sample-level information should be used (such as batch-norm methods). Otherwise, it is not very convincing to say that sample-level information is only useful in distillation.
Response to W2: Your concerns about the sample-level concept's generalizability are valid. In essence, the sample-level approach examines inter-sample relationships within a batch. For instance, considering three customer reviews:
S1: "The camera quality is outstanding..."
S2: "The battery life is inadequate..."
S3: "The display clarity is excellent..."
By comparing specific token positions across these samples, we gain valuable insights into the features and sentiments in various reviews. For instance, the tokens "quality," "life," and "clarity" in the third position provide a nuanced understanding of product attributes. Comparing tokens at the 5th position, i.e., "outstanding", "inadequate", and "excellent", could offer complementary insights for tasks such as sentiment classification, where the degree of positivity or negativity is crucial.
While our study emphasized its utility in distillation, the potential of sample-level information isn't limited to this context. It has broader applications, as seen in certain computer vision tasks where batch relationships are explored during training. For example, [1] proposed the BatchFormer module to model the relationships between different samples. Hence, its usefulness isn't exclusive to distillation, and future work can delve deeper into its applicability in standard training regimes.
References:
[1] Hou, Zhi, Baosheng Yu, and Dacheng Tao. "Batchformer: Learning to explore sample relationships for robust representation learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
> Weakness 3: The results of previous work reported in the paper are very inconsistent with the numbers reported in previous work. For example, when comparing with Minilm v2, the result numbers in Minilm v2 and those reported in this paper are quite different, which makes me concerned about whether the comparison is fair and whether the conclusions drawn from the comparison are convincing.
Response to W3: We acknowledge the inconsistency you've noticed. The primary reason for the variation is that our results were reported on the test sets of GLUE, while MiniLM v2's reported outcomes were from the development sets. To ensure fairness in comparison, we've re-implemented MiniLM v2 and evaluated it on the test sets.
> Question 1: Since this work has done GPT-2 distillation, why not also verify the effect on some generation tasks?
Response to Q1: Thanks for your suggestion. While our primary focus was on distillation for classification tasks, exploring generative tasks provides a more comprehensive understanding of our distillation approach's effectiveness. We experimented with the WikiText-103 benchmark: our model achieved a perplexity of 19.8, compared to distilgpt2's 21.1. This result suggests our method's potential applicability and benefits in the domain of generative tasks as well. We will add this result in the revised version to complement our evaluation.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I still have some concerns:
> Distinction from MiniLM: MiniLM’s attention is built upon the scaled dot-product interaction among queries, keys, and values, resulting in an attention matrix of size H x N x N (where H represents attention heads and N is the sequence length). This design inherently limits flexibility, especially when adjusting head counts
Based on my understanding, it seems that MiniLMv2 doesn't have the limitation that requires the same number of head counts.
> While our study emphasized its utility in distillation, the potential of sample-level information isn't limited to this context. It has broader applications, as seen in certain computer vision tasks where batch relationships are explored during training, For example, [1] proposed BatchFormer module to model the relationships between different samples. Hence, its usefulness isn't exclusive to distillation, and future work can delve deeper into its applicability in standard training regimes.
It's interesting to see some research studying sample-level training approaches. However, I'm curious to know more about why this training approach is not widely used in practice.
> We did experiment with the WikiText-103 benchmark. Our model outperformed with a perplexity of 19.8 compared to distilgpt2's 21.1.
Could the authors give more details? As far as I know, it is very challenging for a distilled model to achieve superior results in terms of perplexity in the task of language modeling.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your efforts in reviewing our responses.
> Question 1: Based on my understanding, it seems that MiniLMv2 doesn't have the limitation that requires the same number of head counts.
Response to Q1: MiniLMv2 has indeed designed mechanisms to navigate the limitation related to a consistent number of head counts. Specifically, it uses a method where self-attention vectors from different attention heads are first concatenated and then split according to the desired number of relation heads. While those tricks can mitigate this limitation in MiniLMv2, they also involve additional computational operations for queries, keys, and values within the self-attention mechanism, as highlighted in our original paper. In contrast, our method for deriving the sample- or token-level relationship matrix is intrinsically invariant to the number of attention heads, thus circumventing the need for such additional computations.
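As a rough sketch of the concatenate-then-split workaround described above (shapes and names are our own illustration, not MiniLMv2's actual code):

```python
import numpy as np

def resplit_heads(q, num_relation_heads):
    """q: (H, N, d_h) per-head query vectors.

    Concatenate the H attention heads per token, then re-split into a
    chosen number of relation heads, so teacher and student (with
    different H) produce the same number of relation maps.
    """
    H, N, d_h = q.shape
    flat = q.transpose(1, 0, 2).reshape(N, H * d_h)  # concatenate heads: (N, H*d_h)
    d_r = (H * d_h) // num_relation_heads
    return flat.reshape(N, num_relation_heads, d_r).transpose(1, 0, 2)

q_teacher = np.zeros((12, 16, 64))  # 12 attention heads
q_student = np.zeros((6, 16, 64))   # 6 attention heads
# Both can be re-split into, say, 24 relation heads per token:
print(resplit_heads(q_teacher, 24).shape)  # (24, 16, 32)
print(resplit_heads(q_student, 24).shape)  # (24, 16, 16)
```

This reshaping is the extra per-layer work on queries, keys, and values that a head-invariant relation matrix avoids.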
> Question 2: It's interesting to see some research studying sample-level training approaches. However, I'm curious to know more about why this training approach is not widely used in practice.
Response to Q2: To the best of our knowledge, the exploration of sample-level relationships is a relatively recent development. While the promise of this method is clear, a contributing factor to its restrained adoption in practice, we speculate, is the added complexity it introduces. For instance, BatchFormer requires the design of specialized modules to model relationships between different samples, and ours needs extra relation-matrix computation and the formulation of a robust loss function to capture these sample-level relationships. Furthermore, the performance advantages of such approaches are predominantly evident in specific tasks like distillation, long-tailed recognition, and few-shot learning. These domains pose challenges that are often more intricate than common tasks like classification. Nonetheless, we believe that efficiently incorporating sample-level relationships during training is a promising direction for future research.
> Question 3: Could the authors give more details? As far as I know, it is very challenging for a distilled model to achieve superior results in terms of perplexity in the task of language modeling.
Response to Q3: We began by initializing our distilled model with a 6-layer structure from a 12-layer GPT-2 model, employing the pseudo-uniform layer-selection technique that originates from the DistilBERT paper. Our model was then pre-trained on a cleaned version of the OpenWebText dataset. Given the detrimental impact of noisy samples, we inspected each sample, removing HTML code, filtering out short sentences, discarding sentences with a high proportion of non-alphanumeric characters, and removing duplicates. This intensive cleaning process ensures a higher-quality dataset for our model. Following pre-training, as in the original paper, we performed task-specific distillation with FCD on the WikiText-103 dataset and evaluated the model using post-fine-tuning perplexity (PPL). For this experiment, we deployed 8 A100 GPUs and used the DeepSpeed framework to speed up experimentation. | Summary: This paper proposes a new knowledge distillation method named Feature Correlation Distillation (FCD) for compressing large pre-trained language models. The proposed method novelly uses token-level and sample-level relationships between teacher and student models and a Pearson linear correlation-based loss function to relax previous exact-match restrictions from KL divergence and MSE loss. Extensive experiments show that the proposed method outperforms baseline methods on GLUE thanks to the proposed objectives.
Strengths: - The idea is novel and motivated by great intuitions, well explained in the paper.
- The authors provided computation complexity analysis, which is very useful to understand the efficiency and complexity of the proposed method. This part is not usually presented in most papers.
- The authors performed well-organized experiments and ablation studies, showcasing the benefits from FCD.
Weaknesses: This paper really doesn't have obvious weaknesses. The only one might be the lack of a limitations section.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Would it be possible to also demonstrate the efficacy of FCD theoretically or intuitively (e.g., through visualization)? The paper is already very good with empirical results, providing more evidence on why such a loss formation works well can be even better.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Not discussed; the authors should add a section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive assessment and valuable suggestion.
> Question 1: Would it be possible to also demonstrate the efficacy of FCD theoretically or intuitively (e.g., through visualization)? The paper is already very good with empirical results, providing more evidence on why such a loss formation works well can be even better.
Response to Q1: Thank you for your suggestion; it's indeed vital to complement empirical results with intuitive explanations. In our paper, we dedicated Section 3.4 to offer a theoretical rationale behind the effectiveness of the loss function, which is central to the FCD mechanism. Additionally, for a deeper dive, we've included in the supplementary materials both a detailed mathematical justification and a visualization illustrating the relational features before and after normalization (see Figure 5). Inspired by your feedback, we're now exploring the possibility of visualizing the loss and gradient landscapes. This will further elucidate how the loss function operates from an optimization perspective.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer NpzH:
Thanks for your feedback!
Best,
Paper3038 Authors | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable feedback. Detailed responses to the reviewers’ comments are posted directly to each reviewer. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper explores task-specific knowledge distillation from a large teacher model, a pre-trained language model (PLM) such as BERT-large or BERT-base, into a student model that is smaller than the teacher. For example, the teacher can be a 12-layer BERT-base model while the student has only 6 layers. One of the challenges for knowledge distillation is that the feature dimensions and numbers of attention heads differ, requiring an additional transformation to match the outputs of the teacher and student models.
The authors propose to build token-level and sample-level relationships from feature representations and then compare these relations between the teacher and student models. They further advocate using a correlation-based loss function over KL or MSE. The proposed approach is evaluated on the GLUE benchmark, showing the effectiveness of their method.
Strengths:
- The idea of token-level and sample-level feature mapping between teachers and students sounds interesting.
- It also provides a strong empirical result.
- The ablation study also shows the importance of different components.
Weaknesses: - It is hard to understand why sample-level matching helps. An intuitive explanation may be required.
- For the sample-level approach, it is computed within a mini-batch. How sensitive is the performance to batch size?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - line 292 says it is trained for 20 epochs which seems much longer than standard finetuning. Is it because the proposed approach is slow to converge?
- In Figure 4 (6.1), B needs to be defined before it is used in the function (token_relation_loss).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and questions. We address your concerns below and will incorporate the corresponding changes in our revision.
> Weakness 1: It is hard to understand why sample-level matching helps. An intuitive explanation may be required.
Response to W1: Thanks for your comments. We assume that tokens at the same position may be related more closely in specific contexts and tasks. Consider the task of sentiment analysis in customer reviews for products. Here's a set of examples:
Sentence 1: "The camera quality is outstanding, making photography a delight."
Sentence 2: "The battery life is inadequate, making long-term use problematic."
Sentence 3: "The display clarity is excellent, making visual experience immersive."
By analyzing the relationships between samples in corresponding positions, we can discern several insights:
* Contextual Relationship: We focus on the tokens in the 3rd position, which are "quality", "life", and "clarity". These tokens, when considered in the context of product reviews, may offer valuable insights into different aspects that customers care about. By comparing these tokens, we could gain a deeper understanding of the particular features being praised or criticized in different reviews.
* Subject Relationship: Despite the similarity in sentiment between sentences 1 and 3, the subjects are different ("camera" and "display"), which indicates that the positive sentiment is directed towards different entities. Analyzing the relationships between these tokens may be valuable for tasks that require understanding subjects and their associated sentiments or actions.
* Sentiment Relationship: The tokens at the 5th position are "outstanding", "inadequate", and "excellent", which clearly relate to the sentiment expressed. Comparing these could indeed provide complementary knowledge for tasks like sentiment classification, where understanding the degree of positivity or negativity is crucial.
In our method, the relationships between tokens in the same position across different sentences are not meant to represent entire sentence meanings but to highlight specific aspects that may be relevant to certain tasks. By carefully choosing the context or domain, this approach can provide valuable insights and improve performance in those targeted areas.
> Weakness 2: For the sample-level approach, it is computed within a mini-batch. How the performance is sensitive to batch-size?
Response to W2: Thank you for raising this concern. We evaluated our method with batch sizes of 8, 16, 32, and 64, observing the following performance on MNLI-m and QNLI:
| Batch Size | MNLI-m | QNLI |
|------------|--------|------|
| 8 | 83.3 | 90.9 |
| 16 | 83.7 | 91.2 |
| 32 | 83.8 | 91.3 |
| 64 | 83.9 | 91.3 |
The results indicate robust performance across various batch sizes. Notably, however, there is a moderate performance drop with extremely small batch sizes (i.e., smaller than 8).
> Question1: line 292 says it is trained for 20 epochs which seems much longer than standard finetuning. Is it because the proposed approach is slow to converge?
Response to Q1: Thanks for pointing this out. It is essential to clarify that while we mentioned 20 epochs in line 292, this does not imply a generally slow convergence of our proposed approach across all tasks. Specifically, the 20 epochs were adopted for the CoLA task, which can be more challenging and can benefit from extended finetuning. On the other hand, for tasks like QQP and MNLI, our method converged efficiently within just 3-5 epochs. Such variance in epoch settings, depending on task complexity, is aligned with practices observed in prior research as well [1][2]. We will emphasize this distinction more clearly in the revised paper to avoid any misunderstanding.
References:
[1] Wang, Wenhui, et al. "Minilmv2: Multi-head self-attention relation distillation for compressing pretrained transformers." arXiv preprint arXiv:2012.15828 (2020).
[2] Jiao, Xiaoqi, et al. "Tinybert: Distilling bert for natural language understanding." arXiv preprint arXiv:1909.10351 (2019).
> Question2: In Figure 4 (6.1), it is required to define B first before use in the function (token_relation_loss).
Response to Q2: Thanks for pointing out this oversight in the supplementary material. In our implementation, B was defined within the forward method and was intended to be passed to the token_relation_loss function. We have rectified this in the revised paper.
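The exact FCD implementation is not included in this thread, but as a hedged illustration of the fix described above — defining the batch size B before it is used in the relation loss — a Pearson-correlation-based token relation loss might be sketched as follows (all names besides token_relation_loss are hypothetical, and NumPy stands in for the original PyTorch code):

```python
import numpy as np

def pearson_loss(a, b, eps=1e-8):
    # 1 - Pearson correlation between two flattened relation matrices;
    # ~0 when they match perfectly, up to 2 when perfectly anti-correlated.
    a = a - a.mean()
    b = b - b.mean()
    return 1.0 - float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def token_relation_loss(student_h, teacher_h):
    # student_h, teacher_h: (B, T, D) hidden states. The batch size B is read
    # from the input here; per the fix above, it must be defined (e.g., in
    # forward()) before this function uses it.
    B = student_h.shape[0]
    total = 0.0
    for i in range(B):
        rel_s = student_h[i] @ student_h[i].T  # (T, T) token-to-token relations
        rel_t = teacher_h[i] @ teacher_h[i].T
        total += pearson_loss(rel_s.ravel(), rel_t.ravel())
    return total / B
```

Because the loss compares correlations rather than exact values, it is invariant to the scale and offset of the relation matrices, which is the kind of relaxation over MSE/KL matching that the rebuttal describes.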
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer qBd9:
Thanks for your feedback!
Best,
Paper3038 Authors | null | null | null | null | null | null |
Bayesian Active Causal Discovery with Multi-Fidelity Experiments | Accept (poster) | Summary: The authors introduce the task of active causal discovery with multi-fidelity oracles which they go onto show is superior to many state-of-the-art methods for active causal discovery. They demonstrate their method in multiple settings and compare it alongside the aforementioned methods, to show superior performance almost across the board.
Strengths: Please see the Questions section for the main review.
Weaknesses: Please see the Questions section for the main review.
- The paper is very hard to follow for a number of reasons. First, the language is lacking in a lot of places, and complex sentence construction makes following the arguments presented somewhat harder still (this sentence on line 256, for example: "Higher-fidelity models are more accuracy but cost much" should presumably be 'higher-fidelity models are more accurate but they cost more'). To help the authors I recommend Grammarly, a great free tool that can pick up many of the language mistakes present in the manuscript. Second, the authors jump around a lot in their narrative, making it quite difficult to understand the point they are trying to make. The manuscript would do well to be proof-read a few more times by different people. Above all, the manuscript lacks a red thread taking the reader through the paper one coherent argument at a time, building a narrative as it goes along. At the end of section three, I still do not quite understand where the authors are heading with their method.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: ## Abstract
- A very good abstract which I enjoyed reading but it is too long, much too long. The abstract is merely meant to summarise what you are doing in the paper, and pick out some main results and then leave the rest for the paper. I suggest about half the length of what you currently have.
## Introduction
- Typo: Markov equivalence class (not ‘equivalent’) - line 26.
- The paragraph starting on line 37 is excellent, very interesting. I would be interested to know if the authors have more information on this particular topic discussed in the paragraph, i.e., do clinicians actually take these considerations into account? References to that effect would be very informative.
- Do you know if there are more sources on multi-objective (multi-target), line 54, studies which you mention? This is an open problem for much of causal inference as you rightly know. It would be effectual to add such sources if you have them.
## Preliminary
- Typo / language: the section title should be ‘Preliminaries’ (plural), as you are describing multiple ideas here.
- Reference Pearl when you first introduce the SCM on line 78. It was his idea after all and he deserves credit whenever it is raised in academic context.
- You will need to give more information on the SCM. It is a strict definition which consists of a four-tuple. See Def 7.1.1 of Pearl’s book on causality. At present you are not giving the correct definition as there are pieces missing w.r.t. your description of the SCM.
- Your language for describing the adjacency matrix is a bit confusing. If entry (i, j) is a 1 in the matrix, that means there is an arc from i to j in the causal diagram, i.e., an edge that starts at i and points to j. Phrase it in these terms instead of saying a ‘causal relation’ as you have done, as you are fundamentally describing a DAG.
- Line 102: you cannot say ‘using causal language’ to define the do-operator because the do-operator is not unique for modelling interventions, there are many other frameworks such as the potential outcome framework. Please revise and provide adequate references.
- You have not discussed identifiability in section 2.2 which seems somewhat important because at present you are assuming that all your interventional distributions can be calculated from the observational distribution which is not always the case.
## The license model
- Explain what $H$ is in equation 2.
- Figure 1 does not make sense. What does the dashed arrow represent? Why does it point to another arrow? You are going to have to redo that figure with much more detail and clarification because this reviewer cannot make heads or tails of it. It looks like a first-order Markov model but I am sure that that has nothing to do with the problem you are studying but equally I do not quite understand what you are trying to show with that figure.
## Experiments
- Line 276: you cannot say “state-of-arts”. That phrasing is used as so: ‘the state-of-the-art model’ - it is an adjective that refers to a noun, in this case ‘model’.
- This section could be improved by adding the experimental topologies that are being targeted by the different causal discovery methods, to give the reader an idea of the complexity involved.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: Please see the Questions section for the main review. I would be happy to increase my score, but my concerns above would have to be adequately addressed in order to do so. This is not nitpicking (though it will surely be construed as such by the authors): the paper is genuinely very difficult to follow for the reasons listed, but it could be much improved with some simple proof-reading and revisions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: For reviewer ug4Y:
Thanks so much for your detailed comments. We will try our best to alleviate your concerns in the following.
> Reviews: **Abstract**
>
> A very good abstract which I enjoyed reading but it is too long, much too long. The abstract is merely meant to summarise what you are doing in the paper, and pick out some main results and then leave the rest for the paper. I suggest about half the length of what you currently have.
Thanks for the comments. Accordingly, we plan to shorten our abstract by removing the discussion of the differences between single- and multi-fidelity oracles, as well as the challenges we may face in multi-fidelity ACD. We believe the removed content can be clearly delivered in the introduction.
> Concerns on the **Introduction**
1. For the typo "Markov equivalence class": we will correct it in the final version.
2. For the topic discussed in the paragraph starting on line 37: we believe this example is very practical, and there are indeed many studies on simulation-based drug-disease relation discovery. For example, [1-9] are all about this topic. We will add these references to the final paper.
3. For the term "multi-objective (multi-target)": there are indeed many papers on multi-target intervention, for example, [10-12]. We will add them to the final version.
> Concerns on the **Preliminary**
1. For the typo and language problems: we will revise them carefully according to your suggestions.
2. For the reference of Pearl: we are sorry for this point. In the final version, we will definitely cite Pearl's paper.
3. For the definition of SCM: in the final version, we will present the correct definition according to Def 7.1.1 of Pearl’s book on causality.
4. For the adjacency matrix: thanks so much for your correction; in the final version, we will change "causal relation from $X_i$ to $X_j$" to "if $E_{ij} = 1$, then there is an edge from $i$ to $j$".
5. For the do-operator: we will remove the term "using causal language", and add Pearl's papers as references for the do-operator.
6. For the identifiability problem, we will add more discussions on the assumptions usually leveraged to derive identifiability.
> Concerns on the **The license model**
1. For $H$: it denotes entropy, that is, $H(x) = -\mathbb{E}_{p(x)}[\log p(x)]$.
2. For Figure 1: the dashed arrow means that $x$ is determined by $\phi$ and $(j,v)$ jointly. It should point to $x$. In the additional submitted one-page pdf, we have replotted this figure according to your comments in **Figure 1**.
> Concerns on the **Experiments**
Thanks for these comments.
1. For the term "state-of-arts": we will revise it in the final version.
2. For the topologies: in the submitted one-page pdf, we have added the structures of the real causal graph in **Figure 4**.
**References**
[1] Li, J., & Lu, Z. (2013). Pathway-based drug repositioning using causal inference. BMC bioinformatics, 14(16), 1-10.
[2] Yang, J., Li, Z., Fan, X., & Cheng, Y. (2014). Drug–disease association and drug-repositioning predictions in complex diseases using causal inference–probabilistic matrix factorization. Journal of chemical information and modeling, 54(9), 2562-2569.
[3] Peyvandipour, A., Saberian, N., Shafi, A., Donato, M., & Draghici, S. (2018). A novel computational approach for drug repurposing using systems biology. Bioinformatics, 34(16), 2817-2825.
[4] Kelly, J., Berzuini, C., Keavney, B., Tomaszewski, M., & Guo, H. (2022). A review of causal discovery methods for molecular network analysis. Molecular Genetics & Genomic Medicine, 10(10), e2055.
[5] Domingo-Fernández, D., Gadiya, Y., Patel, A., Mubeen, S., Rivas-Barragan, D., Diana, C. W., ... & Colluru, V. (2022). Causal reasoning over knowledge graphs leveraging drug-perturbed and disease-specific transcriptomic signatures for drug discovery. PLoS computational biology, 18(2), e1009909.
[6] Chen, H., Zhang, Z., & Peng, W. (2017). miRDDCR: a miRNA-based method to comprehensively infer drug-disease causal relationships. Scientific reports, 7(1), 15921.
[7] Subpaiboonkit, S., Li, X., Zhao, X., Scells, H., & Zuccon, G. (2019, November). Causality discovery with domain knowledge for drug-drug interactions discovery. In International Conference on Advanced Data Mining and Applications (pp. 632-647). Cham: Springer International Publishing.
[8] Subpaiboonkit, S., Li, X., Zhao, X., & Zuccon, G. (2022, November). Causality Discovery Based on Combined Causes and Multiple Causes in Drug-Drug Interaction. In International Conference on Advanced Data Mining and Applications (pp. 53-66). Cham: Springer Nature Switzerland.
[9] Tian, X. Y., & Liu, L. (2012). Drug discovery enters a new era with multi-target intervention strategy. *Chinese journal of integrative medicine*, *18*(7), 539-542.
[10] Tigas, P., Annadani, Y., Ivanova, D. R., Jesson, A., Gal, Y., Foster, A., & Bauer, S. (2023, May). Differentiable Multi-Target Causal Bayesian Experimental Design. In *ICLR 2023-Machine Learning for Drug Discovery workshop*.
[11] Guerrero, L. R., Ho, J., Christie, C., Harwood, E., Pfund, C., Seeman, T., ... & Wallace, S. P. (2017, December). Using collaborative approaches with a multi-method, multi-site, multi-target intervention: evaluating the National Research Mentoring Network. In *BMC proceedings* (Vol. 11, No. 12, pp. 193-200). BioMed Central.
[12] Somvanshi, R. K., Zou, S., Kadhim, S., Padania, S., Hsu, E., & Kumar, U. (2022). Cannabinol modulates neuroprotection and intraocular pressure: A potential multi-target therapeutic intervention for glaucoma. *Biochimica et Biophysica Acta (BBA)-Molecular Basis of Disease*, *1868*(3), 166325.
---
Rebuttal 2:
Title: Rebuttal by Authors
Comment: Dear reviewer ug4Y,
Thanks again for your comments, which, we believe, are very important to improve our paper.
In the rebuttal, we try our best to answer your questions one by one. We will definitely improve our writing to make it more comfortable for a wider audience. We believe most of the comments can be easily addressed in the final version.
If you have further questions, we are very happy to discuss more about them.
---
Rebuttal 3:
Comment: Thanks again for a very interesting paper and the considerable effort that you have put in, as well for taking the time to respond to my comments. I am writing to confirm that I've read the author's rebuttal.
---
Rebuttal Comment 3.1:
Title: Thanks for the response
Comment: Dear reviewer ug4Y, thanks very much for your kind reply. We believe most of your concerns are about presentation. If our responses have alleviated your concerns, is it possible to consider adjusting your score? We really believe these concerns do not influence the major contribution of this paper, which can be quickly addressed in the final version. | Summary: This paper presents a method for active causal discovery with "multi-fidelity" oracles.
Here multi-fidelity refers to the option to request outcome labels for a given experiment from a set of oracles with different quality levels.
The method extends causal experimental design methods where experiments are defined by a given variable and value to this (to my knowledge) novel problem setting.
As such, their method views an experiment as consisting of a triple: (variable, value, fidelity) and defines a mutual information objective for experiment selection.
The method also includes a "cascading fidelity model" accounting for information shared between fidelity models.
They propose an "$\epsilon$-submodular" method for multiple-intervention experiments.
They empirically validate their results.
Strengths: The paper makes novel non-obvious contributions, which I have summarized above. To my knowledge this is the first exploration of multi-fidelity active causal discovery, and they have provided a clever and principled solution. I cannot speak to the correctness of the improved greedy method, but the rest of the paper does not seem to have major flaws.
Weaknesses: ## Minor Concerns
I really only have minor concerns, which I hope will help to improve an already excellent paper.
1. **Writing Quality**. As an example, "which is fundamental for many real-world applications, ranging from health caring, education, to drug discovery, and protein synthesis," is a little awkward and could be improved as, "which is fundamental for many real-world applications, including health care, education, drug discovery, and protein synthesis." Another example would be that "In specific" can be replaced by "Specifically" or "In particular." Another would be starting a sentence with a citation should use the \citet{} function rather than \cite{} or \citep{}, such as on line 34. There are other examples and I would suggest using a tool like Grammarly. I do not find this to be a deal breaker, though, and some sections are written very well (e.g., 2.1).
1. **Motivating Example**. The example of a high-fidelity experiment as an actual patient outcome vs. a low-fidelity experiment as a simulated patient outcome is a little weak. At the least, you need to explain why the simulator could be lower-fidelity. If the simulator has domain knowledge, why not incorporate that knowledge into the causal model rather than try to infer that knowledge? Is it a black box? As alternatives, maybe consider spatially or temporally measured outcomes as examples of low vs. high fidelity. For example, perhaps a lower spatial or temporal resolution measurement vs. a higher one? A satellite image at 10-mile resolution vs. 1-mile resolution. An MRI from a 1T machine vs. a 7T machine? Maybe alternatively, a radiologist's assessment of malignancy vs. a biopsy result?
1. **line 91**. covariance -> variance
1. **f**. You use $f$ to define both the SCM functions and the acquisition function, which is a little confusing.
1. **Active Causal Discovery**. Since you work on identifying the adjacency matrix and the SEM parameters, I'd use causal experimental design. Only because causal discovery has focused more on just identifying the adjacency matrix. But, active causal discovery does have a nice ring to it.
1. **Duplicated references**. There are references with multiple but distinct entries.
1. **License** is a bit arbitrary as a name.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What issues would arise if the cost $\lambda$ and the mutual information $I$ are not on similar scales?
Can you know the scale of the mutual information? Is your mutual information estimator reliable?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: For reviewer yEiY:
Thanks for your detailed comments.
> Reviews: As an example, "which is fundamental for many real-world applications, ranging from health caring, education, to drug discovery, and protein synthesis," is a little awkward and could be improved as, "which is fundamental for many real-world applications, including health care, education, drug discovery, and protein synthesis." Another example would be that "In specific" can be replaced by "Specifically" or "In particular." Another would be starting a sentence with a citation should use the \citet{} function rather than \cite{} or \citep{}, such as on line 34. There are other examples and I would suggest using a tool like Grammarly. I do not find this to be a deal breaker, though, and some sections are written very well (e.g., 2.1).
Thanks for pointing out the writing problems. In the final version, we will carefully revise them according to your comments.
> Reviews: Motivating Example. The example of a high-fidelity experiment as an actual patient outcome vs. a low-fidelity experiment as a simulated patient outcome is a little weak. At least need to explain why the simulator could be lower-fidelity. If the simulator has domain knowledge, why not incorporate that knowledge into the causal model rather than try to infer that knowledge? Is it a black box? As alternatives, maybe consider spacial or temporally measured outcomes as examples of low vs. high-fidelity. For example, perhaps a lower spatial or temporal resolution measurement vs. a higher spatial or temporal measurement? A satellite image at 10-mile resolution vs. 1-mile resolution. An MRI from a 1T machine vs. a 7T machine? Maybe alternatively, a radiologist's assessment of malignancy vs. a biopsy result?
Thanks for this comments. In the patient example, we believe the simulator can be less accurate, since the models used for simulation can be not perfect, it may contain different approximation errors. However, the simulation method may cost little, since we do not have to make experiments on the real people.
We believe the motivating example of spatially or temporally measured outcomes is also very interesting; we will definitely add it to our final paper.
> Reviews: line 91. covariance -> variance. You use f to define the SCM functions and the acquisistion function, which is a little confusing.
In the final version, we will revise these inappropriate aspects accordingly.
> Reviews: Active Causal Discovery. Since you work on identifying the adjacency matrix and the SEM parameters, I'd use causal experimental design. Only because causal discovery has focused more on just identifying the adjacency matrix. But, active causal discovery does have a nice ring to it. Duplicated references. There are references with multiple but distinct entries. Licence is a bit arbitrary as a name.
Thanks for this comment. We will change ACD to causal experimental design in the final version. For the references, we will double-check them to make the paper clearer. For the model name License, we will change it in the final version.
> Reviews: What issues would arise if the cost and the mutual information are not on similar scales?
For the same oracle, we just need to obtain the ranking of the mutual information for different intervention variable and value pairs, so the absolute value may not be that important.
Actually, in our experiments, we find that the mutual information term is always in [0.05, 0.2], which does not differ too much from the cost.
> Reviews: Can you know the scale of the mutual information? Is your mutual information estimator reliable?
In our experiments, the mutual information term always lies in [0.05, 0.2]. We cannot guarantee that the estimator is exactly correct; its reliability comes from the Law of Large Numbers.
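To make the Law-of-Large-Numbers argument concrete, a generic BALD-style Monte Carlo estimate of the mutual information, divided by the experiment cost in the spirit of a cost-normalized acquisition function, can be sketched as below (an illustrative sketch with hypothetical names, not the paper's actual code):

```python
import math
import random

def bald_mi(prior_samples, likelihood, sample_outcome, n_outer=2000, rng=random):
    """Monte Carlo estimate of I(y; theta) = E_{theta, y}[log p(y|theta) - log p(y)],
    where the marginal p(y) is approximated by averaging the likelihood over
    samples theta_j drawn from the prior (or posterior) over models."""
    M = len(prior_samples)
    total = 0.0
    for _ in range(n_outer):
        theta = rng.choice(prior_samples)        # draw a model
        y = sample_outcome(theta, rng)           # simulate an experimental outcome
        log_cond = math.log(likelihood(y, theta))
        log_marg = math.log(sum(likelihood(y, t) for t in prior_samples) / M)
        total += log_cond - log_marg
    return total / n_outer

def acquisition_value(mi, cost):
    # Cost-normalized value of information: mutual information per unit cost.
    return mi / cost
```

The averages converge to the true expectations as the number of samples grows, which is the Law-of-Large-Numbers justification given above; the residual Monte Carlo error is why the estimate can never be guaranteed exact.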
---
Rebuttal Comment 1.1:
Title: Thank you for your reply.
Comment: Thank you for your reply and taking the time to review my comments. Again, I think this is a sufficiently novel (and correct) contribution for acceptance at NeurIPS and I will maintain my score.
Regarding the last 2 points, I do think that because the mutual information estimate will generally depend on the estimator used, whereas the cost is something fixed in each setting, a discussion of the potential issues that this may lead to should be included in the limitations section of the paper.
Good luck.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: Dear Reviewer yEiY,
Thanks very much for your feedback, we will definitely incorporate the discussion on the potential issues of the points you mentioned in the final version.
Thanks
---
Rebuttal 2:
Title: Rebuttal by Authors
Comment: Dear reviewer yEiY, thanks again for your insightful comments, which, we believe, are very important to improve our paper. In the rebuttal and submitted one-page pdf, we have tried to answer your questions one by one. If you have further questions, we are very happy to discuss them. | Summary: In this manuscript, the authors propose a novel method for active causal discovery (ACD) in a multi-fidelity setting, where experiments with different cost and accuracy can be designed and performed for network intervention for the purpose of accurately learning the causal structure.
For this multi-fidelity ACD (MFACD), the manuscript proposes License, a Bayesian framework for "Multi-fidelity active learning for causal discovery".
License adopts an information-theoretic acquisition function motivated by the popular Bayesian Active Learning by Disagreement (BALD) for the prediction of the best experiment, which consists of selecting the node for intervention, its value, and the fidelity.
Furthermore, a cascaded fidelity model is proposed to capture the correlations between the experimental outcomes across different fidelities.
Strengths: The problem of ACD is widely studied in a number of fields, especially so in life science to uncover the regulatory relations among genes.
Network interventions are routinely performed in labs to unveil the causal structure of the network, hence designing experiments that can accurately uncover the causal relations among nodes and minimizing the overall cost of designing and carrying out the experiments is an important problem.
Multi-fidelity experiments may often be considered in practical settings to maximize the value of information acquired by the experiments based on a given experimental budget, and the MFACD problem tackled in this manuscript is therefore of practical importance as well as theoretical interest.
The proposed acquisition function for predicting the best experiment is reasonable, as its acquisition function is motivated by the widely popular active learning scheme - BALD - which is normalized by the experimental cost to assess the cost-normalized value of information that may be attained by a given experiment.
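The cost-normalized selection rule described in this paragraph can be sketched roughly as follows (a minimal illustration under assumed interfaces, not the authors' implementation; the candidate tuples and the `mi_estimates`/`costs` inputs are hypothetical placeholders):

```python
def select_experiment(candidates, mi_estimates, costs):
    """Pick the candidate experiment (node, value, fidelity) that maximizes
    the estimated mutual information per unit of experimental cost."""
    return max(candidates, key=lambda e: mi_estimates[e] / costs[e])

# Hypothetical candidates: (intervention node, intervention value, fidelity).
cands = [("X1", 0.0, "low"), ("X1", 0.0, "high"), ("X2", 1.0, "low")]
mi = {cands[0]: 0.4, cands[1]: 0.9, cands[2]: 0.5}
cost = {cands[0]: 1.0, cands[1]: 5.0, cands[2]: 1.0}
print(select_experiment(cands, mi, cost))  # -> ('X2', 1.0, 'low')
```

Note how the high-fidelity candidate has the largest raw information gain (0.9) but loses after cost normalization, which is the intended behavior of such an acquisition rule.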
The evaluation results in the manuscript show that the proposed License method may have potential advantages over some alternative schemes.
Weaknesses: While the problem being tackled in this current study is very interesting and is both of practical importance as well as of theoretical significance, there are several major concerns regarding the current manuscript as summarized below.
Although the concept of designing multi-fidelity experiments to maximize the accuracy of ACD while minimizing the overall experimental cost is important, the type of multi-fidelity experiments considered in this work is not well motivated.
For example, the authors assume that there are multiple SCMs with different fidelities.
However, it is unclear how the accuracy of the SCM at a specific fidelity is defined and what parameters govern this accuracy.
And in practice, how does one construct or obtain multiple SCMs at different fidelities?
Where does the different experimental cost arise, if all SCMs are of the same form with different fidelities?
Since the manuscript doesn't give any practical example of how such multi-fidelity models may be constructed or accessed and how (and why) they may differ in terms of the acquisition cost (of the experimental outcomes), it is unclear whether the proposed method and the derived results may be applicable in any practical setting.
For example, in drug discovery, it is typical to assess the efficacy of a drug candidate against a specific target using multi-fidelity models: starting from a fast ML surrogate model, then moving to a high-throughput docking model, and finally using molecular dynamics simulations to assess the binding affinity between the drug candidate and the target.
Naturally, there are differences in computational cost as well as accuracy, hence decisions need to be made as to which model should be invoked to maximize the value of information gained per unit of computational investment.
Unfortunately, the current manuscript does not provide any real example to motivate the proposed method and its problem setting, making it difficult to see how the proposed approach might be applied.
Furthermore, it is natural that multi-fidelity experimental outcomes will be correlated to each other (at least to a certain extent) and it is a good idea to explore this property when learning/constructing the multi-fidelity models and utilizing them in MFACD experimental design.
However, in the current study, it is unclear how leveraging such correlations across different fidelities actually translates to the efficacy of the designed experiments and/or savings in terms of the experimental cost.
The description of "extension to multi-target intervention" is also somewhat confusing.
Based on the technical descriptions in 3.3. and the equations (7)-(9), it appears that while the experiments (or interventions) are designed in sequence (hence referred to as "a series of experiments"), the interventions are being applied to the network "simultaneously".
However, as the actual "multi-target intervention" setting being investigated in the current study is not clearly described (or described in a potentially misleading manner), there is some ambiguity in the experimental setting which needs to be clarified.
Theorem 3 states that "For any two experiments $e_s$ and $e_t$, if the corresponding samples $x_s$ and $x_t$ are $\epsilon$-independent given $\phi_M$, $\{e_s, e_t\}$ and $D$, then $I(\cdot; \phi_M \mid \cdot, D)$ is $\epsilon$-submodular."
However, how can one actually guarantee whether a given pair of samples xs and xt are indeed ε-independent, considering the unknown and complex causal structure underlying these samples?
This needs to be elaborated more clearly.
Finally, the experimental results shown in the current study are very limited, and far from sufficient to convincingly demonstrate the general applicability of the proposed License method to general MFACD problems.
Furthermore, the multi-fidelity SCMs used in these evaluations, the number of fidelities considered, and the respective experimental/intervention costs are not clearly described, which makes it difficult to understand how the overall evaluations have been carried out.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the questions and concerns in the above section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The broader impacts of the current study and its limitations are not discussed in the current manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: For reviewer am1k:
> Concerns on the multiple SCMs.
1. In real-world scenarios, multi-fidelity oracles are very common. For example (as mentioned in the paper), to investigate drug-disease causal relations, one can either conduct clinical tests (high cost but more accurate) or build simulators to estimate the effects of a medicine on patients.
2. In our study, it is very hard to find public datasets that are suitable for causal discovery and simultaneously contain multi-fidelity oracles. We therefore start from commonly used causal discovery datasets, regard their ground truths as the highest-fidelity oracle, and simulate multiple low-fidelity oracles by adding Gaussian noise to the ground truths.
3. The specific formulation of the above experiment settings is as follows: for a given intervention $(j,v)$, suppose we have $M$ oracles $\\{\phi_{1},\phi_{2},...,\phi_{M}\\}$; then the experiment results $\\{x_{j,v,1},x_{j,v,2},...,x_{j,v,M}\\}$ are specified as:
>$$x_{j,v,m} = x_{j,v,M} + \delta_m,$$
>$$\delta_m \sim N(0,\sigma_m),$$
where $x_{j,v,M}$ is the ground truth, which can be directly obtained from the datasets. Since $x_{j,v,m}$ is correlated with $x_{j,v,M}$ through the first equation, their underlying oracles $\phi_{m}$ and $\phi_{M}$ are correlated in our simulation. In our experiments, we set $\sigma_1 > \sigma_2 > ... > \sigma_M = 0$. Denoting the cost of $\phi_{m}$ by $\lambda_m$, we set $\lambda_1 < \lambda_2 < ... < \lambda_M$.
4. To demonstrate that our model is generally effective for different cost and noise levels, we conduct experiments based on different sets of oracles.
Specifically, the experiments are conducted with the settings presented in Table 1 of the submitted one-page pdf, and the results are presented in Figure 3 of the same pdf. From the results, we can see that our model consistently performs better than the baselines across the different sets of oracles.
5. The above settings for simulating oracles with different fidelities are very common in the field of multi-fidelity optimization [1-3]; we exactly follow this common practice. We believe this is understandable, since it is hard to find public datasets with multi-fidelity oracles, yet this research direction is meaningful and deserves further effort.
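The simulation procedure above can be sketched as follows (a minimal illustration; the number of oracles, the noise scales, and the costs below are hypothetical placeholders, not the settings from Table 1 of the pdf):

```python
import random

def simulate_multifidelity(x_true, sigmas, seed=None):
    """Simulate outcomes of M oracles for one intervention (j, v): fidelity m
    returns the ground truth plus Gaussian noise with std sigma_m, so the
    highest fidelity (sigma_M = 0) returns the ground truth exactly."""
    rng = random.Random(seed)
    return [[x + rng.gauss(0.0, s) for x in x_true] for s in sigmas]

# Hypothetical settings: 3 oracles, noise decreasing (and cost increasing) with fidelity.
sigmas = [1.0, 0.5, 0.0]   # sigma_1 > sigma_2 > sigma_M = 0
costs  = [1.0, 2.0, 5.0]   # lambda_1 < lambda_2 < lambda_M
x_true = [0.3, -1.2, 0.8]  # ground-truth outcome x_{j,v,M} taken from the dataset
outcomes = simulate_multifidelity(x_true, sigmas, seed=0)
assert outcomes[-1] == x_true  # the highest-fidelity oracle is exact
```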
> Concerns on the effectiveness of modeling oracle correlations.
To study whether modeling the correlation between different oracles is necessary, we first build a variant of our model that regards different oracles as independent components, i.e., removing the links between different $\phi$'s in Figure 1, and then compare our model with this variant. The results are presented in Table 3 of the submitted pdf. We can see that, in most cases, our model achieves better performance than the variant without correlation modeling.
> Concerns on the description of "extension to multi-target intervention".
1. Multi-target intervention aims to (1) first determine a set of intervention variables, and then (2) use them to simultaneously query the oracles and obtain the experiment results.
2. For (1), we leverage equations (7) and (8) to determine the variables one by one within a budget $C$. This step only determines a batch of variables; we do not yet use them to conduct experiments. We use the greedy method (*i.e.,* determining the variables in sequence) because it has solid theoretical guarantees and has been widely used before.
3. For (2), once the batch of variables has been determined, we use them to simultaneously intervene on the oracles and obtain the experiment results.
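The greedy selection step (1) can be sketched as follows (a schematic under assumed interfaces, not the paper's equations (7)-(8); `marginal_gain` stands in for the mutual-information gain, and the toy `gain`/`cost` values are purely illustrative):

```python
def greedy_batch(candidates, marginal_gain, cost, budget):
    """Greedily build a batch of intervention variables: repeatedly add the
    candidate with the best marginal-gain-to-cost ratio until the budget C
    is exhausted. No experiment is run during this selection step."""
    selected, spent = [], 0.0
    remaining = list(candidates)
    while remaining:
        best = max(remaining, key=lambda v: marginal_gain(v, selected) / cost[v])
        if spent + cost[best] > budget:
            break
        selected.append(best)
        spent += cost[best]
        remaining.remove(best)
    return selected

# Toy example with a diminishing-returns gain (hypothetical values).
base = {"A": 3.0, "B": 2.0, "C": 1.0}
def gain(v, chosen):
    return base[v] / (1 + len(chosen))
cost = {"A": 2.0, "B": 1.0, "C": 1.0}
print(greedy_batch(["A", "B", "C"], gain, cost, budget=3.0))  # -> ['B', 'A']
```

The submodularity discussion below concerns exactly this loop: the approximation guarantee for such greedy selection relies on the marginal gains not increasing as the selected set grows.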
> Concerns on Theorem 3.
1. In previous single-fidelity studies, the theoretical foundation behind the greedy method is "T1: if $x_s$ and $x_t$ are independent given $\phi_M$, then the greedy method can be bounded" (see [4-6]).
2. In our multi-fidelity study, we find that $x_s$ and $x_t$ are actually not independent given $\phi_M$, which means the theoretical foundation of the greedy method fails.
3. To extend T1 to the multi-fidelity setting, we introduce the concepts of $\epsilon$-independence and $\epsilon$-submodularity, and propose Theorem 3.
4. $\epsilon$-independence describes the degree of independence in the data. When $\epsilon \rightarrow 0$, $\epsilon$-independence reduces to exact independence; when $\epsilon \rightarrow \infty$, $x_s$ and $x_t$ can be highly correlated. For any dataset, there always exists an $\epsilon$ such that $x_s$ and $x_t$ are $\epsilon$-independent (although $\epsilon$ can be very large).
5. In fact, Theorem 3 is a generalization of Theorem B.2 in [4]: when $\epsilon = 0$, our theorem reduces to Theorem B.2 in [4].
6. The constraint in equation (10) introduces an inductive bias reflecting the independence characteristics of the real data.
> Concerns on the experiments.
To improve our experiments, we conduct a large number of further experiments covering additional evaluation metrics, different oracle settings, the influence of the regularization coefficient $\lambda$ (defined below line 172 in the appendix), and ablation studies (see the submitted pdf).
> The broader impacts of the current study and its limitations are not discussed in the current manuscript.
Actually, we have discussed the impacts of the current study in the Appendix. As for the limitations, we will add a discussion of them in the final version.
**References**
[1] Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks.
[2] Batch Multi-Fidelity Active Learning with Budget Constraints.
[3] Deep multi-fidelity active learning of high-dimensional outputs.
[4] Interventions, where and how? experimental design for causal models at scale.
[5] BatchBALD: Efficient and diverse batch acquisition for deep Bayesian active learning.
[6] ABCD-Strategy: Budgeted experimental design for targeted causal structure discovery.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their careful and thorough responses to my review comments.
The point-by-point response above has addressed many of the previous concerns raised in my review.
I understand that there exist practical limitations on how to evaluate and demonstrate the proposed active causal discovery scheme under a multi-fidelity setting and that the authors have decided to perform the experiments based on benchmark data commonly used for evaluating causal discovery techniques and also based on simulated data under reasonable modeling assumptions.
However, it is also important to recognize their limitations (as mentioned in my original review comments) and discuss them in the manuscript.
To a certain extent, this may be done by incorporating the arguments/explanations in the above rebuttal into the revised manuscript and appendix.
Additionally, I still feel that the "practical" motivation for the proposed method and the current study could be further strengthened, as the problem and the work themselves are theoretically interesting but as the current evaluation setting does not appear to be strongly connected to "real" use cases.
Finally, the additional experiments the authors have performed for the rebuttal add significant value to the current study, as they address a number of concerns raised in my original review and also highlight the merits of the proposed method more clearly.
I hope these results will be integrated into the main text as well as the appendix of this work.
Overall, I would be happy to raise my evaluation score thanks to the clarifications and additional experimental results provided by the authors.
Thank you again.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: Dear reviewer am1k,
Thanks so much for your feedback. We will definitely incorporate the arguments/explanations about the experiment setup into our final version. In addition, we will also present more strong motivations and the added experiments in the final paper.
Thanks again.
---
Rebuttal 2:
Title: Rebuttal by Authors
Comment: Dear reviewer am1k, thanks again for your detailed comments, which, we believe, are very important to improve our paper.
In the rebuttal and submitted one-page pdf, we try to clarify the experiment settings, multi-target intervention, and Theorem 3 in detail.
To alleviate your concerns, we also conduct a large number of experiments, including verifying whether modeling the oracle correlations is important, experiments on different oracle settings, the influence of the key hyper-parameters, and additional evaluation metrics.
If you have further questions, we are very happy to discuss them. We really hope our efforts can alleviate your concerns. | Summary: This paper proposes an approach for Bayesian active causal discovery with multi-fidelity observations. This approach has two main components: (1) a cascade probabilistic model handling the correlation between fidelity levels, and (2) a cost-aware information-theoretic acquisition function, which quantifies the mutual information (per unit of cost) between the causal graph and an observation at a given input location and fidelity level. An extension to the multi-target setting, where multiple nodes can be intervened on simultaneously, is considered. In such a setting, a seemingly natural choice is to select the nodes to be intervened in a greedy fashion by iteratively maximizing the mutual information. However, submodularity does not hold, so the classical approximation guarantee is not obtained. To alleviate this issue, the notions of $\epsilon$-independence and $\epsilon$-submodularity are introduced. The high-level idea is to select points with low mutual information so that submodularity holds approximately. The proposed approach is shown to significantly outperform various baselines across three test problems.
Strengths: 1. The problem considered by this paper is of significant practical relevance.
2. The proposed approach is technically sound.
3. This paper is very well written overall.
Weaknesses: My only significant concern about this paper is its empirical evaluation, as there are several details I could not figure out. See questions 1 and 2 below.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. How is $\epsilon$ chosen? More generally, how were all the algorithm hyperparameters chosen?
2. I believe some of the datasets used are, in principle, not multi-fidelity. How was this addressed?
3. It would be helpful to include figures showing the causal graph structure of the test problems considered.
4. In the context of causal optimization, the term "target" is often used to denote the node to be optimized. Thus, I believe the term "multi-target" is misleading in its use here. Perhaps "batch" would be more appropriate.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The potential negative societal impact was adequately discussed (in the supplement). The authors also mention some interesting ways to improve their method. However, I believe two important limitations were not addressed:
1. Computational cost of the proposed approach vs. the standard approach.
2. Robustness to the choice of the algorithm hyperparameters.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: For reviewer HBUA:
Thanks for your comments. In the following, we try to alleviate your concern one by one:
> Question 1: How is episilon chosen? More generally, how were all the algorithm hyper parameters chosen?
As can be seen in Appendix D, when training our model, the constraint involving $\epsilon$ is converted into a regularization term in the objective. In effect, $\epsilon$ plays a similar role to the regularization coefficient $\lambda$ in the equation below line 184. For $\lambda$, as well as all the other hyper-parameters, we determine the values by grid search on the validation set.
> Question 2: I believe some of the datasets used are, in principle, not multi-fidelity. How was this addressed?
1. In our experiments, we regard the original ground truth as the highest-fidelity oracle. We then add Gaussian noise to the highest-fidelity oracle to simulate the other oracles, adding noise with larger variance to the lower-fidelity oracles to make them less accurate. Finally, we manually set the costs of the oracles to ensure that higher-fidelity oracles cost more than lower-fidelity ones.
2. The specific formulation of the above experiment settings is as follows. We follow the common practice for simulating oracles with different fidelities: for a given intervention $(j,v)$, suppose we have $M$ oracles $\\{\phi_{1},\phi_{2},...,\phi_{M}\\}$; then the experiment results $\\{x_{j,v,1},x_{j,v,2},...,x_{j,v,M}\\}$ are specified as:
$$
x_{j,v,m} = x_{j,v,M} + \delta_m,
$$
$$\delta_m \sim N(0,\sigma_m),$$
where $x_{j,v,M}$ is the ground truth, which can be directly obtained from the datasets. Since $x_{j,v,m}$ is correlated with $x_{j,v,M}$ through the first equation, their underlying oracles $\phi_{m}$ and $\phi_{M}$ are correlated in our simulation. In our experiments, we set $\sigma_1 > \sigma_2 > ... > \sigma_M = 0$. Denoting the cost of $\phi_{m}$ by $\lambda_m$, we set $\lambda_1 < \lambda_2 < ... < \lambda_M$.
3. To demonstrate that our model is generally effective for different cost and noise levels, we conduct experiments based on different sets of oracles.
Specifically, the experiments are conducted with the settings presented in Table 1 of the submitted one-page pdf, and the results are presented in Figure 3 of the same pdf. From the results, we can see that our model consistently performs better than the baselines across the different sets of oracles.
4. The above experiment settings to simulate the oracles with different fidelities are very common in the field of multi-fidelity optimization [1-3].
We will definitely add the above experiment settings in the final paper.
> Question 3: It would be helpful to include figures showing the causal graph structure of the test problems considered.
Following your advice, we have added many examples of the test causal graph structures in Figure 4 of the submitted one-page pdf.
> Question 4: In the context of causal optimization, the term "target" is often used to denote the node to be optimized. Thus, I believe the term "multi-target" is misleading in its use here. Perhaps "batch" would be more appropriate.
Thank you for your comments. In the final version, we will revise this misleading terminology.
> Limitation 1: Computational cost of the proposed approach vs. the standard approach should be addressed.
Actually, we have already compared the computational costs of our model and the baselines in Appendix H.2.
> Limitation 2: Robustness to the choice of the algorithm hyper-parameters.
We have conducted many experiments on the influence of the hyper-parameters, for example, the regularization coefficient $\lambda$ (see Figure 2 of the submitted one-page pdf), the oracle cost and noise levels (see Figure 3 of the submitted one-page pdf), and the DAG regularization coefficient $\beta$ (see Figure 3(b) in the main paper). From the results, we find that model performance is robust to some parameters, such as $\lambda$, but may be sensitive to others, such as $\beta$.
**References**
[1] Li, S., Kirby, R., & Zhe, S. (2021). Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks. *Advances in Neural Information Processing Systems*, *34*, 25463-25475.
[2] Li, S., Phillips, J. M., Yu, X., Kirby, R., & Zhe, S. (2022). Batch Multi-Fidelity Active Learning with Budget Constraints. *Advances in Neural Information Processing Systems*, *35*, 995-1007.
[3] Li, S., Kirby, R. M., & Zhe, S. (2020). Deep multi-fidelity active learning of high-dimensional outputs. *arXiv preprint arXiv:2012.00901*.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal follow-up by Reviewer HBUA
Comment: Dear authors,
Thank you for your thorough response. Several of my concerns have been adequately addressed. However, after learning that multi-fidelity observations were obtained by adding different levels of Gaussian noise, I am not convinced that the current empirical evaluation is representative of real-world problems, where observations at different fidelity levels are typically biased in very complex ways. I consider this a major concern, so I have decided to lower my score to 4.
Best wishes,
Reviewer HBUA
---
Reply to Comment 1.1.1:
Comment: Thanks for your response:
(1) We admit that simulation-based studies cannot perfectly represent real-world settings. However, in the multi-fidelity domain, simulation is a quite common strategy [1-6], since it is hard to find publicly available datasets.
(2) We use Gaussian noise to simulate the low-fidelity oracles, since such noise widely exists in real-world scenarios.
(3) We have experimented with multiple different oracle settings (see the submitted one-page pdf) to demonstrate that the improvement of our model is not due to chance.
(4) To further alleviate your concern, we conduct an additional experiment in which the oracles are built by a different method. Specifically, we use two neural networks (say A and B) to first learn the ground truth separately, and then use the learned models as lower-fidelity oracles. The numbers of parameters of A and B are NA and NB (NA > NB), respectively. Since the accuracy of A is higher than that of B, we regard A as the higher-fidelity oracle. We regard the normalized time cost of inferring the experiment results as the oracle cost; since NA > NB, the time cost of A is larger than that of B.
- For the experiments of ER Graph
More details of the neural networks:
| Fidelity | Number of Parameters | Accuracy (oracle fidelity-level) | Time Cost (oracle cost) |
| -------- | -------------------- | -------------------------------- | ----------------------- |
| A (High) | 420 | 93.22% | 29.68 ms |
| B (Low) | 210 | 91.37% | 18.46 ms |
Experiment results:
| Mode | SHD ↓ | AUPRC (%) ↑ | MSE (%) ↓ |
| ----------- | --------- | ----------- | --------- |
| AIT-REAL | 24.25 | 16.50 | 4.49 |
| AIT-RANDOM | 25.75 | 17.97 | 4.21 |
| CBED-REAL | 27.75 | 20.14 | 5.64 |
| CBED-RANDOM | 22.75 | 15.98 | 3.42 |
| License | **14.75** | **30.41** | **2.12** |
- For the experiments of SF Graph
More details of the neural networks:
| Fidelity | Number of Parameters | Accuracy (oracle fidelity-level) | Time Cost (oracle cost) |
| -------- | -------------------- | -------------------------------- | ----------------------- |
| A (High) | 420 | 93.20% | 44.53 ms |
| B (Low) | 210 | 91.52% | 30.47 ms |
Experiment results:
| Mode | SHD ↓ | AUPRC (%) ↑ | MSE (%) ↓ |
| ----------- | --------- | ----------- | --------- |
| AIT-REAL | 27.25 | 16.44 | 3.89 |
| AIT-RANDOM | 27.50 | 21.65 | 3.67 |
| CBED-REAL | 25.50 | 24.48 | 4.88 |
| CBED-RANDOM | 24.75 | 22.53 | 6.05 |
| License | **18.50** | **25.78** | **2.99** |
**References**
[1] Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks.
[2] Batch Multi-Fidelity Active Learning with Budget Constraints.
[3] Deep multi-fidelity active learning of high-dimensional outputs.
[4] Interventions, where and how? experimental design for causal models at scale.
[5] BatchBALD: Efficient and diverse batch acquisition for deep Bayesian active learning.
[6] ABCD-Strategy: Budgeted experimental design for targeted causal structure discovery.
---
Reply to Comment 1.1.2:
Title: Further rebuttal
Comment: Dear Reviewer HBUA:
We truly appreciate your feedback. Your comments and responses really have improved our paper a lot. Here, we would like to further clarify our work for your consideration.
To begin with, we would like to highlight our major contributions: (1) we believe this paper makes a first step towards multi-fidelity active causal discovery; (2) to solve this problem, we design a novel model for conducting interventions; (3) our paper also makes theoretical contributions: specifically, we find that the theory behind the traditional greedy method may not hold in this setting, so we develop new theory extending the previous results.
Indeed, we cannot ensure that our simulation method completely matches real-world settings, but this is a common practice in the multi-fidelity domain, and it is arguably the best available way to study this novel problem. As mentioned by reviewer Xcc7, simulation-based experiment settings could be understandable practical limitations. We will definitely incorporate the limitations of the simulation-based method into our final paper.
Beyond the above points, to alleviate your concerns about the simulation settings as much as possible, we have made two significant efforts: (1) for the Gaussian-based simulation, we experiment with six different settings (see the submitted one-page pdf), to demonstrate that the improvement of our model is **not sensitive to any specific simulation setting**; (2) we add another method for simulating the lower-fidelity oracles (see our rebuttal), to demonstrate that our model is **not sensitive to the simulation method**.
We really hope that our efforts can alleviate your concerns.
Yours faithfully, Authors
---
Rebuttal 2:
Title: Rebuttal by Authors
Comment: Dear reviewer HBUA, thanks again for your significant comments, which can definitely improve our paper. In the rebuttal and submitted one-page pdf, we try to explain your questions one by one. In addition, we have added a large number of experiments to make our paper more solid. If you have further questions, we are very happy to discuss them. | Rebuttal 1:
Rebuttal: Dear reviewers:
Thanks for your detailed reviews. Additional tables and figures mentioned in the rebuttals are shown in the submitted one-page pdf.
Pdf: /pdf/04c4e8921629fcaeb8abd7ebe20bc78469150d2c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper addresses the problem of active causal discovery with multi-fidelity oracles, where experiments can be done based on different costs, precisions, and reliabilities. The paper formally defines the task of multi-fidelity active causal discovery and proposes a Bayesian framework consisting of a mutual information-based acquisition function and a cascading fidelity model. The paper also extends the framework to the multi-target intervention scenario and introduces a constraint-based fidelity model to validate the greedy method. The effectiveness of the proposed model is demonstrated through extensive experiments.
Strengths: (1) The paper addresses an important and practical problem of active causal discovery with multi-fidelity oracles, which is more realistic than previous single-fidelity settings.
(2) The proposed Bayesian framework, including the mutual information-based acquisition function and the cascading fidelity model, provides a novel and practical solution to the multi-fidelity active causal discovery problem.
(3) The extension to the multi-target intervention scenario and the introduction of the constraint-based fidelity model further enhance the applicability of the proposed model.
Weaknesses: (1) The evaluation and ablation studies for the proposed method are limited, and more comprehensive experiments could be conducted to strengthen the empirical findings.
(2) Have you considered other evaluation metrics besides SHD, AUPRC, and MSE? How does the proposed model compare to other state-of-the-art methods in terms of these metrics?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Have you considered other evaluation metrics besides SHD, AUPRC, and MSE? How does the proposed model compare to other state-of-the-art methods in terms of these metrics?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: For reviewer Xcc7:
Thanks for your overall positive comments on our paper. We try to alleviate your concerns as follows:
> Concerns: More comprehensive experiments could be conducted to strengthen the empirical findings. Have you considered other evaluation metrics besides SHD, AUPRC, and MSE? How does the proposed model compare to other state-of-the-art methods in terms of these metrics?
The reasons why we choose SHD, AUPRC, and MSE as our evaluation metrics:
(1) These metrics are widely used in previous studies in the field of active causal discovery, particularly for the datasets used in our experiments [1-3].
(2) These metrics provide a comprehensive evaluation of the model's performance from different perspectives. The first two metrics assess the accuracy of the learned topological structure, while the last one measures the performance of functional relations.
To alleviate your concerns, we further conduct the following experiments:
1. More experiments on additional evaluation metrics (see Table 2 of the submitted one-page pdf).
2. More experiments on the influence of different oracle settings (see Table 1 and Figure 3 of the submitted one-page pdf). For a given intervention $(j,v)$, suppose we have $M$ oracles $\\{\phi_{1},\phi_{2},...,\phi_{M}\\}$, then the experiment results $\\{x_{j,v,1},x_{j,v,2},...,x_{j,v,M}\\}$ are specified as follows:
$$ x_{j,v,m} = x_{j,v,M} + \delta_m, \qquad \delta_m \sim N(0,\sigma_m), $$
where $x_{j,v,M}$ is the ground truth, which can be obtained directly from the datasets. Since $x_{j,v,m}$ is correlated with $x_{j,v,M}$ through the equation above, their underlying oracles $\phi_{m}$ and $\phi_{M}$ are correlated in our simulation. In our experiments, we set $\sigma_1 > \sigma_2 > ... >\sigma_M = 0$. Denoting the cost of $\phi_{m}$ by $\lambda_m$, we set $\lambda_1 < \lambda_2 <...<\lambda_M$.
To demonstrate that our model is generally effective across different cost and noise levels, we conduct experiments based on different sets of oracles.
Specifically, the experiments follow the settings presented in Table 1 of the submitted one-page pdf, with results shown in Figure 3 of the same pdf. From the results, we can see that our model consistently outperforms the baselines across different sets of oracles.
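The multi-fidelity oracle simulation described above is straightforward to reproduce. Below is a minimal sketch assuming scalar experiment outcomes; the function name and interface are our own illustration, not from the paper:

```python
import numpy as np

def simulate_oracles(x_ground_truth, sigmas, rng=None):
    """Simulate M correlated oracles for one intervention (j, v).

    x_ground_truth : the exact outcome x_{j,v,M} read off the dataset.
    sigmas         : noise levels sigma_1 > ... > sigma_M = 0, so that
                     cheaper oracles are noisier and oracle M is exact.
    Returns [x_{j,v,1}, ..., x_{j,v,M}], where each noisy reading is
    x_{j,v,m} = x_{j,v,M} + delta_m with delta_m ~ N(0, sigma_m).
    """
    rng = np.random.default_rng() if rng is None else rng
    return [x_ground_truth + (rng.normal(0.0, s) if s > 0 else 0.0)
            for s in sigmas]
```

Because every noisy sample is the exact value plus independent Gaussian noise, all oracles are correlated with the exact oracle, matching the construction in the rebuttal.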
3. More experiments on the regularization coefficient $\lambda$ under line 172 in the appendix (see Figure 2 of the submitted one-page pdf).
4. More experiments on ablation studies (see Table 3 of the submitted one-page pdf).
We sincerely thank you for taking the time to review our paper and for your positive comments. We are happy to answer any further questions, which we believe will help improve our paper.
**References**
[1] Tigas, P., Annadani, Y., Jesson, A., Schölkopf, B., Gal, Y., & Bauer, S. (2022). Interventions, where and how? experimental design for causal models at scale. *Advances in Neural Information Processing Systems*, *35*, 24130-24143.
[2] Scherrer, N., Bilaniuk, O., Annadani, Y., Goyal, A., Schwab, P., Schölkopf, B., ... & Ke, N. R. (2021). Learning neural causal models with active interventions. *arXiv preprint arXiv:2109.02429*.
[3] Zheng, X., Aragam, B., Ravikumar, P. K., & Xing, E. P. (2018). Dags with no tears: Continuous optimization for structure learning. *Advances in neural information processing systems*, *31*.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors responding to my concerns; specifically, I appreciate the authors including a large number of additional experiments. I think this is a significant contribution. I will maintain my score.
---
Reply to Comment 1.1.1:
Title: Rebuttal by Authors
Comment: Thanks very much for your response.
---
Rebuttal 2:
Title: Rebuttal by Authors
Comment: Dear reviewer Xcc7, thanks again for your important comments. In the rebuttal and submitted one-page pdf, we have followed your advice to conduct a large number of additional experiments. If you have further questions, we are happy to discuss them. | null | null | null | null | null | null |
A Fractional Graph Laplacian Approach to Oversmoothing | Accept (poster) | Summary: The authors propose two novel Fractional Graph Laplacian (FGL)-based neural Ordinary Differential Equations (ODEs): the fractional heat equation and the fractional Schrödinger equation. These solutions provide enhanced flexibility in the convergence of the Dirichlet energy and make the exponent of the fractional graph Laplacian a learnable parameter. This allows the network to adaptively decide the optimal exponent for a specific task and graph. The fractional graph Laplacian operator generalizes the Laplacian operator. The experimental results highlight the improvements of using fractional graph Laplacians, but the benefit is limited.
Strengths: 1. The proposed FGL Neural ODE can be $\lambda-FD$ which extends the Neural ODE-based GNNs that are limited to being either LFD or HFD.
2. This paper generalizes concepts related to oversmoothing from undirected graphs to directed graphs.
Weaknesses: 1. Applying the fractional graph Laplacian operator destroys the graph's sparsity, transforming it into a dense, or even complete, graph.
2. The paper's primary contribution appears to target oversmoothing issues, as suggested by the title. However, the evidence supporting the claim that the fractional graph Laplacian can mitigate oversmoothing in GNNs is insufficient. The paper posits that the theory and experimental results presented counteract the issue of oversmoothing. However, based on [1], oversmoothing can still occur even when node features exist within an invariant space with rank greater than one. In this context, the paper's conclusions about mitigating oversmoothing, defined via the Dirichlet energy, appear quite constrained. How would the proposed model fare with 256 layers stacked? How does it measure against ODE-based methods (some of which are also designed to address oversmoothing), such as recent work like GRAND [2], GraphCON [3], GRAFF [4], GRAND++ [5], and GREAD [6]?
3. While the paper introduces concepts pertaining to directed graphs, the significant Theorem 5.3 only applies to undirected graphs, and Theorem 5.5 is solely restricted to the standard Laplacian where $\alpha=1$. Although it seems that the Fractional Graph Laplacian (FGL) may bridge the gap between homophilic and heterophilic graphs, the benefits of using FGL for directed graphs aren't clearly established.
4. The authors introduce several definitions, notably the Dirichlet energy for a directed graph (equation 1), but fail to provide solid justification for their choice. For instance, one could question why the energy is not defined in the following way:
$$\sum_{i, j=1}^N a_{i, j}\left(\left\|\frac{\mathbf{x}_i}{\sqrt{d_i^{\text{in}}}}-\frac{\mathbf{x}_j}{\sqrt{d_j^{\text{in}}}}\right\|_2^2 +\left\|\frac{\mathbf{x}_i}{\sqrt{d_i^{\text{out}}}}-\frac{\mathbf{x}_j}{\sqrt{d_j^{\text{out}}}}\right\|_2^2\right).$$
The experimental results provided do not sufficiently substantiate these definitions or the theoretical assertions made in the paper.
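To make the comparison concrete, the alternative energy written above can be computed directly from an adjacency matrix. The sketch below implements only the reviewer's proposed definition (not equation 1 of the paper); the function name and the guard for zero-degree nodes are our own additions:

```python
import numpy as np

def directed_dirichlet_energy(A, X):
    """Reviewer's proposed symmetrized Dirichlet energy:
    sum_{i,j} a_ij * ( ||x_i/sqrt(d_i_in)  - x_j/sqrt(d_j_in) ||^2
                     + ||x_i/sqrt(d_i_out) - x_j/sqrt(d_j_out)||^2 ).
    A : (n, n) weighted adjacency matrix, a_ij = weight of edge i -> j.
    X : (n, d) node feature matrix.
    """
    d_in = A.sum(axis=0)   # in-degrees (column sums)
    d_out = A.sum(axis=1)  # out-degrees (row sums)
    # guard isolated nodes so we never divide by zero
    d_in = np.where(d_in > 0, d_in, 1.0)
    d_out = np.where(d_out > 0, d_out, 1.0)
    X_in = X / np.sqrt(d_in)[:, None]
    X_out = X / np.sqrt(d_out)[:, None]
    energy = 0.0
    for i, j in zip(*np.nonzero(A)):  # iterate over edges only
        energy += A[i, j] * (np.sum((X_in[i] - X_in[j]) ** 2)
                             + np.sum((X_out[i] - X_out[j]) ** 2))
    return energy
```

On a directed cycle, where all in- and out-degrees are equal, constant node features give zero energy, as one would expect from any reasonable Dirichlet energy.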
5. Very poor experimental results: While the paper focuses on directed graph Laplacian, the enhancements it introduces seem to be quite narrow, especially when considering Table 2, the only table for directed graphs. The inclusion of several definitions and theories seems to add complexity to the paper without providing a substantial contribution to the GNN community. The authors also need to include the SOTA results on those datasets.
6. The usage of complex numbers, as in equation (5), should be justified convincingly. While this may be commonplace in pure mathematics or physics, it needs to be shown why such a model is necessary for tackling oversmoothness in GNNs. Recent developments, such as Pytorch's support for complex numbers, do not automatically warrant their use in all contexts. If equation (5) is utilized, it only increases the model's complexity and memory consumption. Therefore, a robust justification is needed.
[1] Oono, Kenta, and Taiji Suzuki. "Graph neural networks exponentially lose expressive power for node classification." arXiv preprint arXiv:1905.10947 (2019).
[2] Chamberlain, Ben, et al. "Grand: Graph neural diffusion." International Conference on Machine Learning. PMLR, 2021.
[3] Rusch, T. Konstantin, et al. "Graph-coupled oscillator networks." International Conference on Machine Learning. PMLR, 2022.
[4] Di Giovanni, F., Rowbottom, J., Chamberlain, B. P., Markovich, T., and Bronstein, M. M. (2022). Graph Neural Networks as Gradient Flows: Understanding Graph Convolutions via Energy. arXiv: 2206.10991 [cs, stat].
[5] Thorpe, Matthew, et al. "GRAND++: Graph neural diffusion with a source term." International Conference on Learning Representations. 2021.
[6] Choi, Jeongwhan, et al. "GREAD: Graph Neural Reaction-Diffusion Networks." (2023).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. In Table 1, could you provide the learned values for $\lambda_{K}(W)$ and $\lambda_{1}(W)$ for each dataset? Moreover, could you offer a detailed analysis of how these results relate to Theorem 5.3? Also, all the learned $\alpha$ values in Table 4 are greater than 0, so according to part(i) of Theorem 5.3, the FGL-ODE is still either L-FD or H-FD. Could you clarify under which circumstances the proposed $\lambda$-FD could be applied?
2. The paper doesn't clearly illustrate how the fractional Laplacian operator mitigates the oversmoothing issue, both from theoretical and experimental perspectives. More in-depth explanations and experiments are needed.
3. Is the FGL-based Neural ODE applicable to large-scale graph datasets, such as those in references [1] and [2]?
4. The results for the Cora dataset presented in Table 1 are inferior to MLP. Could you also display the results for the homophilic dataset using the data split method from reference [5]?
5. Table 1 seems to miss important baselines for heterophilic graphs, such as ACM-GNN[3] and ACMP[4]. It would be beneficial to include these.
6. The paper lacks ablation studies. Could you provide the node classification results for $\alpha=1$ in equations (4) and (5) of FLODE? As the main contribution appears to be the FGL, are there any ablation studies that use layer-wise FGL-based GNNs instead of the FGL Neural ODE?
7. Other questions please refer to weakness part.
[1].Hu W, Fey M, Zitnik M, et al. Open graph benchmark: Datasets for machine learning on graphs[J]. Advances in neural information processing systems, 2020, 33: 22118-22133.
[2].Lim D, Hohne F, Li X, et al. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods[J]. Advances in Neural Information Processing Systems, 2021, 34: 20887-20902.
[3].Luan S, Hua C, Lu Q, et al. Revisiting heterophily for graph neural networks[J]. arXiv preprint arXiv:2210.07606, 2022.
[4].Wang Y, Yi K, Liu X, et al. ACMP: Allen-Cahn Message Passing for Graph Neural Networks with Particle Phase Transition[J]. arXiv preprint arXiv:2206.05437, 2022.
[5]. Shchur O, Mumme M, Bojchevski A, et al. Pitfalls of graph neural network evaluation[J]. arXiv preprint arXiv:1811.05868, 2018.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes, the authors addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful to the reviewer for the time taken to carefully assess our work and for the valuable feedback. We address each point individually. “W/Q” numbers the weakness or question, followed by our response.
---
**W1**: Using FGLs does not lead to a loss of sparsity across all graphs: the graph's density incrementally increases as $\alpha$ goes to 0, see Appendix A, Fig. 6. Moreover, we've demonstrated in Thm. 4.1 that the FGL still respects the topology of the underlying graph. We also argue that adding some “virtual” edges can be advantageous for heterophilic graphs as nodes far apart in the input graph can directly interact.
---
**W2**: The notion of oversmoothing (OS) in [1] and the Dirichlet energy are intimately connected, see [Cai et al., 2020]. When the invariant subspace is the kernel of the SNL this corresponds to OS (wrt the decay of the Dirichlet energy). While the invariant subspace is always the kernel in standard GNNs, we show that this is not true for our method! This *provably* mitigates OS. To more directly address your query, Fig. 1 in the supplementary pdf illustrates that the performance of our model does not deteriorate with 256 layers. Please also note that the Dirichlet energy is a standard notion for measuring OS, see, e.g., [3,4] that you mention.
Regarding the comparisons, [2,4] are already included in Tab. 1, and we add [3,5,6] to the already discussed models designed to mitigate OS, see also Point 4 in the general response.
---
**W3**: We extend Thm. 5.5, see Point 3 in our general response.
The efficacy of the FGL for directed graphs is substantiated in two key ways. 1) FGLs exhibit the ability to directly capture long-range dependencies by adding “virtual” edges, see also our answer to **W1**, 2) we present ablation studies on the effect of the FGL, see Point 1 in the general response. All ablation studies show that the combination of the FGL and the ODE framework improves the performance on directed graphs.
We hope that this is enough evidence for the benefits of using FGL in directed graphs.
---
**W4**: The justification for our def. of Dirichlet energy is elaborated on lines 103 ff., immediately after its introduction. Additionally, our defs. of the Dirichlet energy (Def. 3.1), SNA, and SNL (Def. 3.2), and their relationship to the Dirichlet energy (Prop 3.6) in directed graphs, recover the standard defs. in the undirected case, see, e.g., [Cai et al., 2020].
---
**W5**: We would like to respectfully clarify a few points. In addition to Tab. 2, Fig. 3 also presents results for 11 directed graphs.
Our model demonstrates superior performance on 4 out of 9 real-world datasets, and is on par with the SOTA on the 11 directed graphs in Fig. 3 (see Tab. 5 in Appendix F.2). Moreover, our approach does not rely heavily on hyperparam. tuning and avoids unnecessary complexity in the GNN layers (jumping knowledge, non-linearities, dropout, or positional encodings), maintaining the comprehensibility and relevance of our theoretical results. This is in contrast to the models we compare with, e.g., GREAD, GraphCON, GRAND.
We believe our contribution to the GNN community lies in our novel approach to understanding directed graphs and fractional Laplacians, an area that has been less explored in the past. Our work can provide a foundation for further exploration and advancements in this area.
Regarding SOTA results, we welcome suggestions on methods to include in our revision.
---
**W6**: Our justification is straightforward: complex numbers arise naturally when dealing with directed graphs, as eigenvalues and eigenvectors can be complex. Also, one of the ODEs we consider, the Schrödinger equation, is inherently complex. Moreover, the complexity of the GNN layers scales linearly with the hidden dimension, i.e., complex numbers are not problematic at all. Thus, the use of complex numbers in our work is rooted in necessity and applicability, rather than the availability of software support. This was already used by related works on directed graphs, e.g., MagNet [Zhang et al., 2021].
---
**Q1**: Please see Point 2 in the general response for the first two questions. Regarding $\lambda$-FD: as noted in Fig. 2, graphs with medium homophily could benefit from this. One could, for example, initialize the exponent with a negative value when the homophily level is known a priori. Our theory offers the backbone for future work to analyze these use-cases.
---
**Q2**: Please see our response to **W2** and Point 2 in the general response. Also, note that we actually *prove* that our approach mitigates OS, see Section 5.
---
**Q3**: Our approach can be scaled using methods such as randomized or truncated SVD. We noted this in Fig. 8 in Appendix A, where 20% of the singular values are already sufficient for SOTA performance.
However, given our present resource constraints we are currently unable to run experiments on such large-scale datasets. We acknowledge this as a significant future direction for our work, though it is beyond the immediate scope of this study.
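The scaling route mentioned above, taking fractional powers in the singular-value domain and truncating to the top-$k$ singular triplets, can be illustrated as follows. This is our own minimal sketch, not the authors' implementation; a practical version would use a randomized or sparse SVD instead of the dense `np.linalg.svd` shown here:

```python
import numpy as np

def fractional_power_svd(L, alpha, k=None):
    """Fractional power of a graph matrix via its SVD.

    Writes L = U S V^T and defines L^alpha := U S^alpha V^T, which is
    well-defined even for directed (non-diagonalizable) graph matrices.
    k : if given, keep only the top-k singular triplets (truncated SVD),
        trading accuracy for memory/compute on larger graphs.
    Assumes alpha >= 0 or L of full rank (else 0^alpha diverges).
    """
    U, S, Vt = np.linalg.svd(L)
    if k is not None:
        U, S, Vt = U[:, :k], S[:k], Vt[:k]
    return U @ np.diag(S ** alpha) @ Vt
```

For a symmetric positive-definite `L` with distinct eigenvalues, `alpha=1` recovers `L` exactly and `alpha=0.5` yields a matrix square root.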
---
**Q4**: Cora is relatively simple, as both the nodes’ features and their 1-hop neighbourhoods are expressive. However, as you rightly suggested, employing fewer training nodes (as in [5]) does indeed showcase the utility of our method, even without any hyperparameter tuning:
| model | Cora | Citeseer | Pubmed |
|:-:|:-:|:-:|:-:|
| MLP | 58.2±2.1 | 59.1±2.3 | 70.0±2.1 |
| Ours | 77.8±1.1 | 69.1±1.6 | 74.2±2.2 |
We will happily incorporate these findings into the paper's appendix.
---
**Q5**: Please see Point 4 in the general response.
---
**Q6**: Please see Point 1 in the general response.
---
We've thoroughly addressed your concerns. Considering the positive feedback from other reviewers and our comprehensive revisions, we respectfully request a reevaluation of your score. Specifically, if any sections still appear unsound or lacking in presentation, please guide us to them.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. Here is my feedback:
Regarding Q1: However, Figure 2 showcases a synthetic cycle graph. Are there any realistic graphs where applying the $\lambda-FD$ would be necessary? **The gap between your theoretical framework and empirical results is evident**.
For Q3: I find the response insufficiently persuasive. In my view, **the limitations of your method when applied to large-scale graphs are apparent**.
For Q4: The results of your methods are **worse than GCN/GAT** on all Cora/Citeseer/Pubmed Datasets.
For W2/Q2: Referring to Figure 1 in the provided PDF, it's essential to compare your results with other state-of-the-art deep GNNs. Moreover, as highlighted in [1], the squirrel and chameleon datasets contain a significant number of duplicate nodes. Evaluating your model's efficacy primarily on these two datasets does not provide a comprehensive assessment. You need to refer to the experimental settings in other deep GNN papers.
[1]. Platonov, Oleg, et al. "A critical look at the evaluation of GNNs under heterophily: are we really making progress?." arXiv preprint arXiv:2302.11640 (2023).
---
Reply to Comment 1.1.1:
Comment: We appreciate the feedback from the Reviewer. To address the raised points, we would like to emphasize two critical aspects:
1. We did not expect to achieve state-of-the-art performance on Cora or other homophilic graphs where node features with low Dirichlet Energy show promising results.
2. The essence of our work lies in introducing the fractional Laplacian, its theoretical advantages over standard Laplacians, and its demonstrable improvement over the standard Laplacian and flexibility across various graph datasets. Therefore, our emphasis is not solely on our performance metrics on Cora but instead on the broader contributions we present.
---
Regarding Q1: We kindly but strongly disagree with the Reviewer's statement: "The gap between your theoretical framework and empirical results is evident." Our theoretical analysis is verified by the experiments, and vice versa, the experiments are fully backed up by our theoretical analysis, as shown in our general response (Point 2) and the supplementary pdf (Tab. 2, Fig. 1). We also back this up with Figs. 6 and 7 in our appendix and our ablation studies (see Point 1 in our general response).
We have conducted a complete analysis w.r.t. the exponent since, in our model, $\alpha$ is an unconstrained learnable parameter. Hence, it could also be negative.
The synthetic cycle graph serves as one illustrative example. Another example is Squirrel (e.g., Fig. 1 in the supplementary pdf), where the learned exponent is negative at depth 2. **The fact that the model only sometimes learns negative exponents does not change our alignment of theory and practice.**
---
Regarding Q3: The limitations of our method are indeed clear, as stated in the "Limitation" section of the main paper.
However, our model already comfortably accommodates medium size graphs of 20,000 nodes. Handling large-scale graphs is currently beyond the scope of our work. We would like to highlight that **before our work, the application of fractional Laplacians was restricted to tiny graphs (a few hundred nodes) due to the computation of Jordan decompositions.**
---
Regarding Q4: Please note that in the main paper, our model performs better than GCN/GAT on both Citeseer and Pubmed. As for Cora, we perform slightly better than GAT but slightly worse than GCN.
As previously clarified, we did not tune hyperparameters for that particular experiment. We instead followed the Reviewer's request and demonstrated that in a low-labeling regime, our model substantially outperforms MLPs. The experiment was not meant to chase SOTA in this new split scenario, as that would have required more time for hyperparameter tuning. It is crucial to understand that **we do not assert superiority over smoothing models like GAT or GCN on Cora, where node features with low Dirichlet Energy show promising results.** At the same time, our model reduces to GCN (with skip connections and shared weights) if $\alpha=1$. We thus expect that, given the correct hyperparameters, FLODE does not perform significantly worse.
---
For W2/Q2: Fig. 1 is an **ablation study** on the effect of different update rules on the test accuracy. More specifically, we wanted to show how the choice of learnable exponent and skip connections influence the performance. Therefore, the comparison with SOTA GNNs is not pertinent. Please note that we compare against SOTA GNNs in Tab. 1 and 2 in the main paper, and in the general response.
Chameleon, Squirrel, and Actor are widely accepted as standard heterophilic datasets in the GNN field, often used in studies at top-tier conferences like [1] and [2]. Even if there are duplicate nodes, these datasets are still challenging due to their heterophilic nature. It is worth noting that none of the papers the Reviewer suggested for comparison consider the alternative datasets that the Reviewer proposes in their most recent answer.
---
[1] Di Giovanni, F., Rowbottom, J., Chamberlain, B. P., Markovich, T., and Bronstein, M. M. (2022). Graph Neural Networks as Gradient Flows: Understanding Graph Convolutions via Energy. arXiv: 2206.10991 [cs, stat].
[2] Choi, Jeongwhan, et al. "GREAD: Graph Neural Reaction-Diffusion Networks." (2023). | Summary: The paper extends the notion of Dirichlet energy to define oversmoothing in directed graphs. It introduces a fractional Laplacian operator that is used to create graph neural ODE architectures. The resulting designs are non-local and can alleviate oversmoothing. The paper provides empirical results (accuracy) on nine real-world benchmarks for node classification.
Strengths: * Overall, the paper is well-written and easy to follow. Also, the proposed framework (FLODE) is simple;
* The proposed method works relatively well for both directed and undirected graphs (flexibility).
Weaknesses: - Novelty: I rank the novelty as moderate. First, directed symmetrically normalized laplacian has been previously used. Moreover, it seems that the advantages come mainly from the choice of diffusion operator. However, it is clear that the choice of diffusion operator impacts oversmoothing, and we can design, for instance, high-pass (or non-local) graph filters to alleviate oversmoothing. Last, there is no novelty in the evaluation setup.
- Theory: Some theoretical results focus on the properties of the fractional Laplacian, which appears only complementary to the paper. As a result, the paper does not discuss empirical results in light of the theoretical ones.
- Experiments: The empirical results are not strong. Except for Squirrel, the performance gains (over the second-best baseline) do not seem to be statistically significant. Also, the proposed model is the best-performing model only in 4 out of 9 datasets. In addition, the paper does not consider large-scale datasets from OGB, for instance.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: **Questions and comments**
1. Why is it called Fractional graph laplacian if it is based on the decomposition of the normalized adjacency (SNA)?
2. Do the authors have any intuition to explain why the proposed method works so well on the Squirrel dataset?
3. Some parts could be improved for precision and clarity, e.g.,
- In the paragraph 'Homophily and Heterophily', the paper uses $i \in V$ to denote the distribution used in the expectation. What does it mean (e.g., uniform dist. over the nodes)?
- In the introduction, the paper says: "a GNN that minimizes the Dirichlet energy is expected to perform well on homophilic graphs...". However, there is no dependency on the GNN --- x is defined as initial node features.
5. Running the ablation study on Appendix F3 to more datasets would be interesting and helpful to validate some intuitions behind the proposal.
6. There is a vast literature on methods for tackling oversmoothing in GNNs (DropEdge, etc.), enabling such approaches for heterophilic datasets. Shouldn't these approaches also be considered as baselines for comparison?
7. Have you tried employing an NN instead of using just a channel mixing matrix?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discuss the limitations of the proposal, mainly mentioning the computation cost associated with SVD.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful to the reviewer for the time taken to carefully assess our work and for the valuable feedback. We address each point individually. “W/Q” numbers the weakness or question, followed by our response.
---
**W1**: Please note that our work is the first in-depth theoretical analysis of the directed SNA, tying it to Dirichlet energy. We introduce a new def. for the fractional graph Laplacian (FGL) specific to directed graphs and present novel theoretical results.
Regarding your comment on the choice of diffusion operator: we agree that the operator selection can have an impact on OS. However, it's not a foregone conclusion that just any diffusion operator can alleviate this issue. Through our work, we've demonstrated, both theoretically and empirically, that the FGL *combined* with the ODE framework is capable of mitigating OS. However, this does not necessarily hold for the FGL without the ODE framework. To further clarify this point, we have included new additional ablation studies (see Point 1 in our general response, Figure 1 and Table 1 in the supplementary pdf), complementing the one we conducted on the Chameleon dataset (refer to Appendix F.3).
---
**W2**: Our theoretical insights are not mere supplements but rather fundamental components of our work. They underline how the FGL, used in conjunction with graph ODEs, is adept at capturing long-range dependencies in both directed and undirected graphs. This conclusion stems from Thms. 4.1 and 5.3-5.5; the theoretical results from Section 3 are fundamental for the proofs of the above-mentioned thms.
To further emphasize this point, we have made efforts to closely associate our empirical results with our theoretical ones. See Points 1 and 2 in our general response.
---
**W3**: We value your feedback but have a different perspective on our empirical results. Our method excels on 4 out of 9 datasets and performs well especially on heterophilic graphs, ranking best on 3/6 and top three on 5/6. Additionally, our performance on the directed datasets (Fig. 3) is competitive with the SOTA.
What distinguishes our approach is its simplicity. Unlike many models, ours doesn't rely on layer normalization, positional encodings, jumping knowledge, rewiring, dropout, non-linearity in the GNN, or extensive hyperparameter searches. Our ablation study underscores how our model's performance stems from our simple design choices.
On large-scale datasets, while our method holds potential for scalability through truncated SVD, our hardware prevents us from effectively handling datasets with 100K+ nodes. Nevertheless, our current results and theory pave the way for future advancements, especially around scaling.
---
**Q1**: The term "Fractional Graph Laplacian" refers to our approach to applying the fractional power of a Laplacian. The term "Laplacian" is often used in graph theory to denote a matrix representation of a graph. Our approach of taking fractional powers in the singular value domain generalizes to any graph laplacian, not only the SNA or the SNL, hence the name. We hope this clarifies our choice, and we remain open to alternative suggestions!
---
**Q2**: The Squirrel dataset is heterophilic, where nodes of different classes connect. Standard GNNs often underperform on such datasets due to OS. Our method mitigates this, explaining our enhanced performance over baselines on Squirrel. Yet, OS isn't the sole challenge in graph learning; "oversquashing" is another, preventing long-range node information transfer. [Song et al., 2023] suggest Squirrel experiences oversquashing. Our model's success might also stem from the virtual edges introduced by the FGL, possibly addressing oversquashing. However, this remains speculative without formal proof.
Additionally, we present ablation studies on “Squirrel”, see Point 1 in the general response.
[Song et al., 2023] Song, Yunchong, et al. "Ordered GNN: Ordering Message Passing to Deal with Heterophily and Over-smoothing." (2023).
---
**Q3**: Thank you for your feedback. To address your concerns:
a) Yes, we mean a uniform distribution over nodes. We've now opted for a simple sum over the nodes to avoid confusion.
b) We clarify the GNN's dependency by stating: "A GNN whose output node features minimize the Dirichlet energy is expected to perform well on homophilic graphs."
If there are other areas needing clarity, please let us know. Your assistance is vital in ensuring that our work is both comprehensive and accessible to all readers.
---
**Q4**: We appreciate your suggestion. In response, we've included a more comprehensive ablation study for the Squirrel dataset, as detailed in the supplementary pdf (see Tab. 1). This extended analysis should provide additional validation for our method.
Additionally, we added ablation studies that show that our method can scale to large depths and validate our theoretical results (see Points 1-2 in the general response and Fig. 1 in the supplementary pdf).
---
**Q5**: See Point 3 in our general response.
---
**Q6**: We have explored other learnable parameters beyond a diagonal channel mixing matrix, including full matrices and time-dependent matrices. These did not have the same theoretical results or empirical performances as our method, which is why we discarded them. However, we ran the experiments, see also our response to Reviewer dNLj.
While it would be possible to apply a neural network, this approach cannot be analyzed within our current framework. We believe this could be a direction for future work, and we would be glad to mention this possibility in our conclusion.
---
We have addressed each of your comments and feel our paper stands on solid theoretical and empirical grounds. We would appreciate it if you could specify the parts you consider “unsound”, given your score of 2, so we can further refine our work. If our responses have resolved your concerns and you believe our paper has improved, we kindly ask you to raise the score.
---
Rebuttal Comment 1.1:
Comment: Thanks for taking the time to answer my questions. I have no further questions/comments. I have read the other reviews and authors' responses and would like to keep my initial score. | Summary: The authors consider the problem of classification on attributed graphs with a geometrical approach. They introduce a fractional graph Laplacian for undirected and directed graphs. They generalize the notion of Dirichlet energy to directed graphs. They study the ODEs based on this Laplacian and prove that their solutions converge to high-, low- or middle-frequency patterns, for undirected graphs and for directed graphs in a particular case. Finally they implement these ODEs as GNNs and assess their performances.
Strengths: I do not know this topic well and can hardly evaluate how significant the improvements brought by this article are.
Weaknesses: To my understanding, a main point of this article is the fractional Laplacian for directed graphs, but Theorem 5.5 is restricted to the SNL.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: On formatting: in the current state of the manuscript many intra-links are broken. Some figures of interest for understanding are not in the main text (I did not look at them). Maybe for table 1 the authors could use font formatting (bold, italic, ...) instead of colors.
L. 245, "In accordance with the results in Section 5, we select W as a diagonal matrix.": could the authors elaborate?
Part 6, for the directed SBM, what are the features?
Out of curiosity: what happens if W is untied, i.e. taken time-dependent or different at each layer? Does one have the learned alpha small (or negative) for heterophilic datasets?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors address a main limitation of their model, its computational complexity. They could conduct an experiment showing that truncating the SVD preserves accuracy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful to the reviewer for the time taken to carefully assess our work and for the valuable feedback. We address each point individually. “W/Q” numbers the weakness or question, followed by our response.
---
**W1**: We agree that Thm. 5.5 appeared restricted, and we have since made improvements to address this concern. We have now generalized Thm. 5.5 to encompass not only $\alpha=1$ but any $\alpha$, as long as the underlying graph has a normal SNA. This was feasible without altering our proofs, since any normal SNA is unitarily diagonalizable. See also Point 3 in our general response.
Moreover, we would like to clarify some points:
1. Our results hold not only for directed graphs but also for undirected ones. Moreover, the performance improvement can also be seen on undirected graphs; see the ablation studies or Point 2 in the general response.
2. The ability to capture long-range dependencies via the FGL is not tied solely to Thm. 5.5. The FGL also adds “virtual” edges between distant nodes, as described in Section 4. Thus, our approach is effective on tasks with (directed) heterophilic graphs. We further substantiate this claim with an additional ablation study reported in the supplementary pdf.
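The "virtual edges" effect can be seen directly in a toy NumPy sketch (our own illustration, not the paper's code): for a path graph whose Laplacian only couples neighboring nodes, a fractional power such as $L^{1/2}$ directly couples even the two endpoints.

```python
import numpy as np

def fractional_laplacian(L, alpha):
    """L^alpha via eigendecomposition; assumes L is symmetric PSD."""
    w, V = np.linalg.eigh(L)
    w = np.clip(w, 0.0, None)  # guard against tiny negative round-off
    return (V * w**alpha) @ V.T

# Path graph on 6 nodes: only consecutive nodes share an edge.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

L_half = fractional_laplacian(L, 0.5)
# The endpoints are uncoupled in L, but L^0.5 adds a direct ("virtual") edge.
print(L[0, n - 1], L_half[0, n - 1])
```

The off-diagonal fill-in is what lets a single FGL step propagate information between distant nodes.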
---
**Q1**: We apologize for the inconvenience caused by the broken intra-links. They currently refer to the appendix, and this issue will be resolved in the camera-ready version, as the main manuscript and appendix can be combined into one document.
Regarding the Figs. you've mentioned, we would greatly appreciate it if you could specify which ones you are referring to, so we could potentially move them to the main text, considering the availability of an extra page for the camera-ready version. Based on the suggestions of the other Reviewers, we moved our ablation studies to the main paper; however, we are open to discussion if you believe other Figs. are even more important for understanding our points. Lastly, we thank you for your suggestion about the color usage in Tab. 1. We will underline first place, bold second, and italicize third, to ensure better readability.
---
**Q2**: In Thm. 5.3 and Thm. 5.5, our theoretical findings are valid for symmetric and diagonal channel mixing matrices respectively. To align with these theoretical insights and ensure their applicability to our model, we selected $\mathbf{W}$ to be a diagonal matrix in all our experiments. It's worth noting that all diagonal matrices inherently possess symmetric properties, which makes them suitable for satisfying the conditions of both Thms. We will clarify this in our revised paper.
---
**Q3**: For the directed SBM, we followed the original setup used in the MagNet paper, where the features are one-dimensional and are sampled from the standard Gaussian distribution.
Making $\mathbf{W}$ time-dependent or distinct at each layer is certainly intriguing. We conducted a set of experiments with this modification to our model and found that it yielded comparable performance. Moreover, the exponent displays similar tendencies: for heterophilic graphs (Chameleon and Squirrel), the learned exponent is small, while for homophilic graphs it remains close to $1$. Notably, Film presents an anomaly: its accuracy drops and its learned exponent does not diminish, despite it being a heterophilic graph.
| | | Film | Squirrel | Chameleon | Citeseer | Pubmed | Cora |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| Undirected | Accuracy | 34.72±1.21 | 63.49±1.66 | 73.42±2.0 | 78.19±2.55 | 88.96±0.35 | 86.66±1.09 |
| | $\alpha$ | 0.92±0.01 | 0.30±0.04 | 0.38±0.06 | 0.97±0.007 | 0.98±0.05 | 0.95±0.02 |
| Directed | Accuracy | 35.27±1.33 | 73.29±1.62 | 78.79±1.86 | - | - | - |
| | $\alpha$ | 0.94±0.004 | 0.25±0.08 | 0.25±0.08 | - | - | - |
---
However, our theoretical analysis regarding oversmoothing (OS) would not directly extend to this adjusted model, as analyzing solutions of time-dependent ODEs is more challenging. Since one of our goals in this work was to maintain a tight relationship between theory and practice, we chose to share $\mathbf{W}$ among all layers. Nevertheless, we are happy to include our experimental results using an untied $\mathbf{W}$ in the appendix and identify the investigation of such time-dependent neural ODEs, in both theory and practice, as a promising avenue for future work. Finally, to facilitate further research in this direction, we will update our code to include a ``--no_sharing`` option. This will allow other researchers to easily explore these questions using our open-source code.
---
**Limitations**: By employing a truncated SVD and retaining $k$ singular values, we achieve the optimal $k$-rank approximation of the fractional Laplacian, as supported by the Eckart–Young–Mirsky theorem. As shown in Appendix A, Figure 8, even when using a truncated SVD that captures only 24% of the singular values, we still attain the same SOTA performance on Chameleon as with the full SVD. In the revised version of our paper, we will clarify this point and provide a reference for the corresponding theoretical result.
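For readers unfamiliar with the Eckart–Young–Mirsky theorem, the optimality of the truncation can be sanity-checked in a few lines (our illustration, not the released code); the spectral-norm error of the rank-$k$ truncation equals the $(k{+}1)$-th singular value:

```python
import numpy as np

def truncated_svd(M, k):
    """Best rank-k approximation of M (Eckart-Young-Mirsky)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))  # stand-in for a (fractional) Laplacian
k = 12                             # keep 24% of the 50 singular values
M_k = truncated_svd(M, k)

s = np.linalg.svd(M, compute_uv=False)
err = np.linalg.norm(M - M_k, 2)   # spectral-norm approximation error
print(err, s[k])                   # the two coincide
```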
---
We appreciate your thorough feedback and have incorporated all suggested changes. Given our revisions and the excellent rating you gave our presentation, we kindly ask you to reconsider the soundness, contribution, and overall scores. We emphasize the novelty of our approach: introducing an FGL, generalizing the Dirichlet energy to directed graphs, ensuring that our models can mitigate OS, and backing up our strong experimental results with theoretical ones. We hope that this convinces you of our strengths.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed answers and additional experiments.
> Regarding the Figs. you've mentioned, we would greatly appreciate if you could specify which ones you are referring to...
I had in mind fig. 4, which is mentioned twice in the main text. | Summary: The authors propose a neural ODE (FLODE) that uses a fractional graph Laplacian as an alternative to GNNs, which famously suffer from oversmoothing after a few layers. The heat equation $x'(t) = -L^{\alpha}x(t)W$ is shown to possess nice qualities wrt the Dirichlet energy, such that it does not always end up with a low-frequency solution (i.e., oversmoothing as $t \to \infty$). Several experiments are done with synthetic, real, directed, and undirected graphs, and the proposed method is shown to perform comparably with existing methods.
Strengths: Even though I am not an expert in neural-ODE based GNNs, I found the paper clearly written and easy to follow. The strength of this paper is in its theoretical analysis of the spectral values of their solutions. Based on a cursory reading of Di Giovanni et al (2022), and given that the learned $\alpha$s are often not 1 (Appendix Table 4), this does not seem to be a trivial extension of existing work.
Weaknesses: Please see questions below.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. If Film is a heterophilic graph, I expected the learned exponent to be closer to 0, which is the case with Squirrel and Chameleon. Do the authors have a hypothesis about why this might be the case?
2. Similarly, do the authors have a hypothesis about why FLODE was not in the top three performing models for Film, Pubmed and Cora?
3. Have the authors actually inspected the eigenvalues and singular values of their models and compared them to the theoretical results?
4. I recommend using boldface and italics in addition to color in Table 1. In Figure 3, the line for FLODE should be thicker or have a different line style.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful to the reviewer for the time taken to carefully assess our work and for the valuable feedback. We address each point individually. “W/Q” numbers the weakness or question, followed by our response.
---
> **Q1**: “If Film is a heterophilic graph, I expected the learned exponent to be closer to 0, which is the case with Squirrel and Chameleon. Do the authors have a hypothesis about why this might be the case?”
Film is a notably challenging dataset, on which GNN-based models often struggle to significantly outperform models that do not incorporate the graph structure, such as MLPs. We tried initializing the exponent $\alpha$ to smaller values; however, the learning algorithm did not adapt $\alpha$, and the performance did not improve. There could be several reasons for this.
1. Film is by far the sparsest graph among the datasets we used, which could potentially impede the propagation of information.
2. The features in this dataset may be less expressive (with 932 dimensions compared to 2325 and 2089 for Chameleon and Squirrel, respectively).
3. It is a class-imbalanced graph, which makes predictions more challenging.
4. The Film dataset represents a subgraph of a larger real-world Wikipedia graph, which may result in a loss of critical information.
Nevertheless, these are mere hypotheses and the definitive cause remains unclear.
---
> **Q2**: “Similarly, do the authors have a hypothesis about why FLODE was not in the top three performing models for Film, Pubmed and Cora?”
Please note that FLODE is among the best models on the directed version of the Film dataset, see Tab. 2. For Pubmed and Cora, we suspect that our model's performance may have been affected by our limited computational resources and a consequently restricted hyperparameter search. Importantly, our model essentially reduces to GRAFF when we set $\alpha=1$. Therefore, given the same hyperparameters, we should always be able to match or outperform GRAFF. We are optimistic that with more extensive tuning, our model would improve its standings on these datasets.
Finally, we want to mention that Pubmed and Cora are highly homophilic datasets, on which standard GNNs already perform very well, as the 1-hop neighborhood is highly informative. Hence, we do not expect improvements from FLODE there, since long-range dependencies are probably not that important.
---
> **Q3**: “Have the authors actually inspected the eigenvalues and singular values of their models and compared them to the theoretical results?”
We have examined the eigenvalues of our models and compared them to our theoretical predictions; see Point 2 in our general response. These detailed results are included in the supplementary pdf, see Tab. 1 and Fig. 1. The observed parameters align perfectly with our theoretical results, indicating that our method can adaptively learn to exhibit HFD behavior, particularly in heterophilic settings. This supports our claim that our proposed model dynamically adjusts its behavior according to the data.
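The kind of diagnostic described here can be reproduced in miniature (a hedged sketch in our own notation, not the authors' code): integrate $x'(t) = -L^{\alpha} x(t) W$ with explicit Euler steps and track the normalized Dirichlet energy $\mathrm{tr}(x^\top L x)/\|x\|^2$. A positive diagonal $W$ yields decaying energy (LFD behavior), while a negative one yields growing energy (HFD behavior).

```python
import numpy as np

def energy_trace(L, X, W, alpha=0.5, dt=0.05, steps=40):
    """Euler-integrate x' = -L^alpha x W and record normalized Dirichlet energy."""
    w, V = np.linalg.eigh(L)
    L_a = (V * np.clip(w, 0.0, None)**alpha) @ V.T
    energies = []
    for _ in range(steps):
        X = X - dt * L_a @ X @ W
        energies.append(np.trace(X.T @ L @ X) / np.sum(X**2))
    return energies

# Toy path graph on 3 nodes, 2 feature channels, diagonal channel mixing W.
L = np.array([[1.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
X = np.random.default_rng(0).standard_normal((3, 2))
lfd = energy_trace(L, X, np.diag([0.5, 0.5]))    # positive W: energy decays
hfd = energy_trace(L, X, np.diag([-0.5, -0.5]))  # negative W: energy grows
print(lfd[0], lfd[-1], hfd[0], hfd[-1])
```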
---
> **Q4**: “I recommend using boldface and italics in addition to color in Tab. 1. In Fig. 3, the line for FLODE should be thicker or have a different line style.”
We appreciate your suggestions. We will use underline, boldface, and italics in addition to colors in Tab. 1 to ensure its accessibility. Also, in Fig. 3, we will make the line for FLODE thicker for better visibility.
---
We value your thorough review and positive feedback on our work, especially given the depth of our theoretical analysis, which you highlighted. In light of our clarifications and the strengths you've identified, we believe that our work represents a significant advancement in addressing the oversmoothing issue in GNNs, backed by a solid theoretical foundation. We hope that our explanations and additional ablations further underscore the connection between our strong experimental results and our theoretical framework, convincing you even more.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional experiments and details addressing my and other reviewers' comments. I don't have any further questions. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thorough and insightful remarks. We fully implemented all remarks in the revised version of the paper, which we believe improved the paper significantly.
**1. More extensive ablation studies**
To augment our ablation studies on Chameleon and Citeseer (see Appendix F.3), we perform several new experiments, the results of which can be found in the new supplementary pdf. First, we carried out a comprehensive ablation study on “Squirrel”, see Tab. 2. Here, we systematically remove various parts of our model (the learnable exponent, the ODE, both, and the complex weight matrix). The results show a significant drop in performance when any of these components are removed, reinforcing the fact that our model's convincing performance is indeed a result of the combination of the fractional Laplacian and the neural ODE framework.
At the request of the Reviewers, we conducted another ablation study to investigate the role of depth on the Chameleon, Citeseer, Cora, and Squirrel datasets. The results are depicted in Fig. 1 of the supplementary pdf and demonstrate that the neural ODE framework enables GNNs to scale to large depths (256 layers). Finally, we see that the fractional Laplacian improves over the standard Laplacian on the heterophilic graphs, which supports our claims in the main paper. We highlight that using only the fractional Laplacian without the neural ODE framework oftentimes outperforms the standard Laplacian with the neural ODE framework; see the ablations in Tab. 2 in the supplementary pdf and Tab. 7 in Appendix F.3. This indicates the importance of the long-range connections built by the fractional Laplacian.
Please also be aware that Cora, Citeseer, and Pubmed are graphs with high homophily levels. While we do not anticipate improved performance on these graphs (as capturing long-range dependencies is less important there), we also do not expect a performance drop. For instance, we could simply set $\alpha=1$, which should ensure that we achieve at least the same performance as GRAFF, provided the hyperparameters are equal.
**2. Highlighting the intimate connection between our theoretical and experimental findings**
We further demonstrate the close alignment of our theoretical and experimental results in Fig. 1 of the supplementary pdf, which enables us to precisely anticipate when the models will exhibit HFD or LFD behaviors. In this context, we calculated parameters (according to Thm. 5.3) and illustrated at each depth the expected and observed behaviors. For Squirrel and Chameleon, which are heterophilic graphs, we observe that both their theoretical and empirical behaviors are HFD. Additionally, the learned exponent is small. In contrast, for Cora and Citeseer, we see the opposite.
We have also incorporated the calculated parameters (according to Thm. 5.3) for Tab. 1 in the main paper (see Tab. 1 in the supplementary pdf). Here, we employ the best hyperparameters to solve both the fractional heat and Schrödinger graph ODEs, further substantiating the intimate link between our theoretical advancements and practical applications. This tight connection provides strong validation for our theoretical framework, indicating its utility in effectively guiding and predicting the behavior of GNNs in practice.
Please also note that we could enforce LFD/HFD behavior by using diagonal channel mixing matrices with only positive/negative entries. However, this doesn’t lead to the same empirical results which may be due to the limited expressivity of the resulting network.
**3. Generalization of Thm. 5.3**
We appreciate the insights provided by two of the reviewers and generalize Thm. 5.3 to any directed graph with normal SNA. The updated version of the Thm. will be included in the revised paper, and the existing proof remains valid as any normal matrix is unitarily diagonalizable, allowing the same proof technique used in Thm. 5.3. We regard further generalizations to arbitrary directed graphs as an intriguing open question for future exploration.
We thank the reviewers for inspiring this straightforward but crucial extension of Thm. 5.3.
**4. Inclusion of baseline models for experiments**
In response to the insightful suggestions from several reviewers, we have expanded our set of compared models (see Tab. below) to include recent neural ODE-based GNNs and other methods designed to mitigate oversmoothing (OS). These additions encompass GREAD, GraphCon, ACMP, and GCN/GAT with DropEdge. Please note that for models not evaluated using the same (standard) splits, we have run their code on these standard splits, a detail we will highlight with a footnote in our revised paper.
The results will be added in our paper:
| Model | Film | Squirrel | Chameleon | Citeseer | Pubmed | Cora |
| - | - | - | - | - | - | - |
| GREAD [Choi et al., 2023] | **37.90±1.17** | 59.22±1.44 | 71.38±1.3 | 77.60±1.81 | **90.23±0.55** | **88.57±0.66** |
| GraphCon [Rusch et al., 2022] | 35.58±1.24 | 35.51±1.40 | 49.63±1.89 | 76.36±2.67 | 88.01±0.47 | 87.22±1.48 |
| ACMP [Wang et al., 2023] | 34.93±1.26 | 40.05±1.53 | 57.59±2.09 | 76.71±1.77 | 87.79±0.47 | 87.71±0.95 |
| GCN+DropEdge [Rong et al., 2020] | 29.93±0.80 | 41.30±1.77 | 59.06±2.04 | 76.57±2.68 | 86.97±0.42 | 83.54±1.06 |
| GAT+DropEdge [Rong et al., 2020] | 28.95±0.76 | 41.27±1.76 | 58.95±2.13 | 76.13±2.20 | 86.91±0.45 | 83.54±1.06 |
| Ours (This work) | 37.16±1.42 | **64.23±1.84** | **73.60±1.55** | **78.07±1.62** | 89.02±0.38 | 86.44±1.17 |
We also move the paragraph detailing our baseline models from Appendix F.1 (lines 704 ff.) to the main text, improving clarity on our selection of robust baseline models for heterophilic graphs. Given the reviewers' emphasis on baseline models addressing oversmoothing — a concern already addressed in our initial submission — this shift seemed important.
---
We address the remaining concerns as part of the response to each Reviewer.
Pdf: /pdf/019cee8c378021156996acd6758fc2fb8990d366.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Inferring Hybrid Neural Fluid Fields from Videos | Accept (poster) | Summary: The paper proposes a novel approach for recovering the density and velocity fields of inviscid fluids from sparse multiview videos. The model has two main contributions:
- First, it incorporates physics-based losses to enforce the inference of a physically plausible velocity field that is divergence-free and drives the transport of density. This helps to deal with the visual ambiguities of fluid velocity.
- Second, the model provides a hybrid neural velocity representation, which consists of a base neural velocity field capturing most irrotational energy and a vortex particle-based velocity modeling residual turbulent velocity. This representation enables the recovery of vortical flow details.
Strengths: Originality: This paper primarily tackles the visual ambiguities in inverse rendering techniques for fluid data, focusing on resolving visual ambiguities. The integration of physics-based losses into the volume rendering framework is a rational approach. The results in Figure 8 illustrate the effectiveness of the newly proposed learning constraints in significantly improving the accuracy of the reconstruction. Moreover, the introduction of a hybrid representation of velocity fields and particle-based vortex flow showcases originality in the methodology.
Significance: The proposed model makes a significant contribution to the visual understanding of fluids, particularly smoke, fog, and gas.
Weaknesses: For methodology:
1. One contribution of this paper is to incorporate new forms of physical constraints in the framework of inverse rendering. However, the general idea is not entirely novel as previous work by Chu et al. (2022) has also presented a similar (albeit different) approach, which somewhat weakens the technical novelty of this paper. For example, using the density transport equation from incompressible fluid or NS equation as a loss function has been employed in other related papers. Please refer to the work from Baieri et al. (2023) and Li et al. (2023).
- [Baieri et al., 2023] Fluid Dynamics Network: Topology-Agnostic 4D Reconstruction via Fluid Dynamics Priors. Arxiv, 2023.
- [Li et al., 2023] PAC-NeRF: Physics Augmented Continuum Neural Radiance Fields for Geometry-Agnostic System Identification. ICLR, 2023.
2. The constraint imposed on emitting radiance in HyFluid does present limitations in its applicability to real-world scenarios. The requirement of a constant emitting radiance may hinder accurate recovery in situations where spatially-varying lighting exists in the scene.
Lacking references:
3. Some closely related work seems to be missing, such as NeuroFluid (Guan et al., 2022) and PAC-NeRF (Li et al., 2023), both of which also focus on visual physical inference through inverse rendering. It is important to acknowledge these works as they contribute to the existing body of literature in this field and provide valuable insights and techniques for comparison and benchmarking purposes.
- [Guan et al., 2022] NeuroFluid: Fluid Dynamics Grounding with Particle-Driven Neural Radiance Fields. ICML, 2022.
For experiments:
4. Is the proposed method limited to handling only inviscid fluids? It would be beneficial to evaluate the proposed method in a broader range of fluid scenarios, including different types of flows (e.g., laminar, turbulent) and varying fluid properties (e.g., viscosity, density). This would demonstrate the generalization ability of HyFluid under diverse conditions.
5. Considering that the constraint of constant radiance may not be practical in complex real-world scenes, I highly recommend that the authors compare the reconstruction results obtained with and without the constraint.
6. The proposed model is only compared with two existing models, which is not sufficient. To make the results more convincing, the authors could incorporate more advanced neural rendering techniques designed specifically for dynamic scenes. Additionally, given that the dataset comprises synthetic fluid simulation data, it would be beneficial for the authors to provide quantitative results regarding the reconstruction of the velocity field in comparison to the ground truth on these simulated scenes.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In the paper, a grid-based representation is used for density and velocity. When implementing $L_{density}$ and $L_{project}$, how exactly are these calculations performed? Are the loss functions computed for all grid positions, or is interpolation used to compute the losses at sampled points in space?
2. What is the difference in performance between decomposing the velocity field into a base neural velocity field and a vortex particle-based velocity, versus solely using high-frequency position embedding?
3. In the case of using physics-based loss functions in HyFluid, how does the predicted velocity field of HyFluid compare to the ground truth (GT) velocity field quantitatively? Can HyFluid be compared to other methods in terms of velocity field quantitatively?
4. The paper does not provide experimental results and discussions regarding the constraint of constant radiance. What is the capability of HyFluid with real-world scenes that exhibit spatially-varying radiance?
5. Can you provide more quantitative/ qualitative results of other baselines?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and constructive suggestions! Please see our response below.
For methodology:
1. **Technical novelty**:
[Chu et al.] is indeed the most relevant work. Our approach incorporates novel losses, including a projection loss and a laminar loss, as well as a hybrid representation to capture turbulent velocity fields. This technical novelty leads to better velocity reconstruction compared to [Chu et al.], as can be seen in the novel-view re-simulation and velocity visualization (especially the videos in the supplementary material). [Baieri et al.] and [Li et al.] focus on general dynamic objects and do not consider complex fluid dynamics such as turbulence. We will add these references and this discussion to our paper.
2. **Spatially-varying appearance modeling**:
Following your suggestion, we allow learning spatially and temporally varying appearances. Please see the results and discussion in the "Spatiotemporally varying appearance" in the global response above.
For references:
3. **References**:
Thank you for the note. We will add these references to our paper. We will also add a clarification that our setting is different from NeuroFluid. NeuroFluid focuses on learning fluid dynamics from a large amount of data. Therefore, they train and evaluate the model on synthetic data. Moreover, they assume known initial state and no inflow source. In contrast, our goal is to reconstruct plausible fluid fields from real sparse multiview videos without assuming any training data.
For experiments:
4. **Different viscosity level**:
Following your suggestion, we include additional synthetic data experiments which have different viscosity levels. Please refer to the "Evaluation on different viscosity levels" in the global response above for results and discussion.
5. **Spatially-varying appearance modeling**: Please see response to 2. above.
6. **Comparison to other existing methods**:
We include NeuroFluid and GlobTrans [Franz2021] as additional compared methods. Please refer to the "Comparison to GlobTrans" and "Comparison to NeuroFluid" in the global response above.
7. **Quantitative results regarding velocity field reconstruction**:
We clarify that our goal is to reconstruct plausible fluid fields from real videos, and thus in our experiments in the main paper **we only use real data** (as specified in L201) which does not provide groundtruth 3D fields but only multi-view videos. To evaluate 3D fields, we include experiments on synthetic examples in the "Evaluation on synthetic data" in the global response.
For questions:
1. **How to compute losses**:
For L_density, in each training step, we randomly select one timestamp (i.e., one frame) and sample continuous points in 3D space. For these points, we compute the first-order derivatives via the automatic differentiation provided by PyTorch. For L_project, we do not use sampling, as our MGPCG solver requires a regular grid. Thus, we use a regular grid of $128^3$ to solve for the projection. The projected velocity vectors at the regular grid points are then used to supervise the velocity network outputs at those exact points.
2. **Differences compared to solely using high-frequency position embedding**:
Please note that the state-of-the-art neural fluid reconstruction method PINF [Chu2022] uses an advanced high-frequency position embedding [Sitzmann2020] for the velocity field. Our comparison to PINF shows that our approach reconstructs better fluid fields with richer vortical details (in particular, please see the re-simulation video in the supplementary material).
3. **Comparison to GT velocity**:
Please see our response in 7. above.
4. **HyFluid on real scenes**:
As clarified above, all our experiments use real data which inevitably has spatially-varying lighting due to global illumination. Please also see our response in 2. above for modeling spatially-varying appearance.
5. **More results of other baselines**:
Please refer to our response in 6. above.
**Reference**:
[Chu2022] Chu, M., Liu, L., Zheng, Q., Franz, E., Seidel, H. P., Theobalt, C., & Zayer, R. (2022). Physics informed neural fields for smoke reconstruction with sparse data. ACM Transactions on Graphics (TOG), 41(4), 1-14.
[Sitzmann2020] Sitzmann, V., Martel, J., Bergman, A., Lindell, D., & Wetzstein, G. (2020). Implicit neural representations with periodic activation functions. Advances in neural information processing systems, 33, 7462-7473.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: Thank you for the replies to my questions and comments. After reading the other reviews and answers, most of my concerns are addressed. I'll raise my score on this paper and look forward to its future updates with improved content.
---
Reply to Comment 1.1.1:
Title: Thank you for your updated review!
Comment: Dear Reviewer MRuw,
Thank you for kindly updating your review! We will incorporate all the contents in the rebuttal to our revised manuscript.
Sincerely,
Authors of submission 1335
---
Rebuttal 2:
Title: Happy to answer any further questions
Comment: Dear Reviewer MRuw,
Thank you for reviewing our submission. We have posted our response per your suggestions and questions. We are happy to discuss with you and answer any further questions. We are looking forward to your feedback!
Best,
Authors of submission 1335
---
Rebuttal 3:
Title: Looking forward to discussion with you
Comment: Dear Reviewer MRuw,
We have posted our clarification and response. We wish our response can address your concerns and we would like to hear your updated feedback and evaluation. We are more than happy to discuss with you and answer any further questions!
Best,
Authors of submission 1335 | Summary: The paper presents a neural reconstruction method for individual fluid flows which is re-trained for each new scene. It combines the established iNPGs for representing densities and velocities with vorticity transporting partices in a hybrid flow representation called HyFluid. A set of appearance- and physics-based losses is used to train the iNPGs and optimize vorticity and lighting in 3 steps. With this approach it is possible to recover 3D density and velocity from sparse views and enable novel-view rendering, re-simulation, and predictions of states over time.
Strengths: I see the following strong points in this submission:
- The method is built on physical priors to recover a physically meaningful velocity field.
- The loss on the projected velocity to ensure divergence-freeness is novel as far as I can tell.
- Using particles to transport vorticity seems novel, at least in that context.
- Provides better results than SOTA neural methods regarding re-simulation and prediction.
- The results look good, the motion of the density is coherent and also suited for re-simulation.
- The reconstruction works with simple, sparse images.
Weaknesses: The treatment for vorticity seems novel, but from the provided ablations I cannot see a substantial impact:
- In Fig. 8 there seems to be a slight improvement.
- In the videos, the ablation study results of "w/o vort." and "full" look very similar.
- The difference might be more visible in the flow field, but this is not demonstrated.
- In the videos, the novel view synthesis results of PINF [1] and HyFluid are almost identical.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Since the approach is trained on ScalarFlow [2], it would be interesting to see how the method compares to the ScalarFlow reconstructions. ScalarFlow does not use neural networks and is therefore a good non-learned baseline: a single-scene reconstruction method using physical priors (advection, pressure projection). Re-simulation and future prediction might also be possible there, since ScalarFlow uses a fluid solver (MantaFlow) internally. Can the authors explain why they haven't compared to the ScalarFlow 3D data despite using the 2D input images?
- It would also be interesting to compare to GlobTrans [3], as this method is trained/optimized as a re-simulation method. Have the authors looked into this?
- The paper uses an unseen GT view to evaluate the metrics. Have the authors tried/considered reconstructing a synthetic case to be able to make a 3D comparison? This might also show the efficacy of the flow reconstruction more clearly, since the visual appearance could be matched perfectly.
- The velocity flickers in the supplemental video; there is no smooth evolution. Is there any coupling of the velocity over time (like self-advection)?
- In the videos the velocity fields are rendered to 2D. Can the authors explain how that is done? Wouldn't a slice provide more insight into the actual motions? Averaging might blur the small-scale vorticities.
Overall, I think the paper targets an interesting direction and provides a promising approach for a tough problem. I'm still somewhat on the edge regarding this paper, but I'm open to readjusting/raising my score after the rebuttal.
References:
[1] Mengyu Chu, Lingjie Liu, Quan Zheng, Erik Franz, Hans-Peter Seidel, Christian Theobalt, and Rhaleb Zayer. Physics informed neural fields for smoke reconstruction with sparse data. ACM Transactions on Graphics, 2022.
[2] Marie-Lena Eckert, Kiwon Um, and Nils Thuerey. ScalarFlow: a large-scale volumetric data set of real-world scalar transport flows for computer animation and machine learning. ACM Transactions on Graphics (TOG), 2019.
[3] Erik Franz, Barbara Solenthaler, and Nils Thuerey. Global transport for fluid reconstruction with learned self-supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are briefly discussed, I don't see anything significant missing here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and constructive suggestions! Please see our response below.
- **Comparison to ScalarFlow and GlobTrans**:
Thank you for your suggestion! Since both ScalarFlow and GlobTrans are optimization-based methods, we compare to GlobTrans, which is the later work and has shown better results than ScalarFlow. Please refer to "Comparison to GlobTrans" in the global response above for results and discussion.
- **Synthetic data for 3D evaluation**:
Following your suggestion, we additionally include synthetic examples using ScalarFlow synthetic dataset generation code. Please refer to the "Evaluation on synthetic data" and "Evaluation on different viscosity levels" in the global response above for results and discussion.
- **Self-advection of velocity**:
We do not include a physical loss for self-advection of velocity like $\mathcal{L}=D\mathbf{u}/Dt-\mathbf{f}$ (where $\mathbf{f}$ denotes external force) because we empirically found that it often leads to oversmoothed velocity fields. This may be because the material derivative term for velocity, $D\mathbf{u}/Dt=\partial \mathbf{u}/\partial t+\mathbf{u}\cdot\nabla \mathbf{u}$, admits trivial local solutions where the velocity is spatiotemporally constant. In optimization-based methods such as GlobTrans, this trivial solution is avoided by a global optimization. However, in neural continuous reconstruction this is not straightforward to address, and we leave it as future exploration. In fact, we suspect this is why PINF (which uses a velocity advection loss) reconstructs only laminar flows for real videos.
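The trivial-solution pitfall can be seen in a minimal 1D finite-difference sketch (hypothetical helper names; in the paper's setting the residual would be evaluated with autograd on a continuous neural field, and the external force term is omitted here):

```python
import numpy as np

def material_derivative_residual(u_t0, u_t1, dt, dx):
    """Finite-difference residual of Du/Dt = du/dt + u * du/dx for a
    1D velocity field (illustrative sketch only)."""
    dudt = (u_t1 - u_t0) / dt        # temporal derivative
    dudx = np.gradient(u_t0, dx)     # spatial derivative
    return dudt + u_t0 * dudx        # material derivative

# A spatiotemporally constant velocity zeroes the residual everywhere,
# so this loss alone cannot disambiguate the flow.
u_const = np.full(64, 0.7)
residual = material_derivative_residual(u_const, u_const, dt=0.1, dx=0.05)
print(np.abs(residual).max())  # 0.0
```

The constant field is a perfect minimizer of this loss even though it carries no information about the true motion, which illustrates why a global optimizer (as in GlobTrans) is needed to escape it.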
- **Render velocity field as slice**:
We render the velocity field to 2D by projecting every 3D velocity vector onto the camera plane and then using volumetric rendering to integrate them. This indeed smooths the visualization. Following your suggestion, we additionally include slice rendering in Figure R6 in the global response PDF. The comparison shows that our velocity field recovers more vortical details than PINF. We will include this figure in our revised paper.
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: I thank the authors for the clarifications; it is good to see the synthetic results. Contrary to concerns raised in other reviews, the paper now shows both real and synthetic results, so I'd be happy to support acceptance, and I'll raise my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your updated review!
Comment: Dear Reviewer 4cV8,
Thank you for kindly updating your review! We will incorporate all the contents in the rebuttal to our revised manuscript.
Sincerely,
Authors of submission 1335
---
Rebuttal 2:
Title: Happy to answer any further questions
Comment: Dear Reviewer 4cV8,
Thank you for reviewing our submission. We have posted our response per your suggestions and questions. We are happy to discuss with you and answer any further questions. We are looking forward to your feedback!
Best,
Authors of submission 1335 | Summary: This paper presents an innovative neural dynamic reconstruction method that achieves good results in recovering fluid density and velocity fields through the introduction of physical constraints and a hybrid neural velocity representation. Despite some weaknesses and issues, this method has significant implications for the in-depth study and resolution of fluid dynamics problems. Future work could focus on refining the method and validating and applying it to a wider range of fluid scenarios.
Strengths: This method proposes a new approach to neural dynamic reconstruction that can simultaneously recover fluid density and velocity fields, overcoming the challenges posed by the visual ambiguities of fluid velocity in existing methods.
Physics-based losses are introduced to enforce a divergence-free velocity field, driving the transport of density and enhancing the accuracy of velocity estimation.
A hybrid neural velocity representation is designed, incorporating a base neural velocity field that captures most irrotational energy and a vortex particle-based velocity that models residual turbulent velocity. This enables the method to effectively recover vortical flow details.
Weaknesses: The paper lacks detailed presentation of experimental results and quantitative evaluations, thus lacking sufficient validation of the method's performance.
There is no comparison with other methods in addressing the visual ambiguity of fluid velocity, making it difficult to assess the method's advantages and disadvantages comprehensively.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Further investigation is needed to determine whether the method can handle complex fluid scenarios such as non-Newtonian or multiphase fluids in practical applications.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper does not mention the computational resource requirements, such as computation time and memory consumption, which are important factors for the feasibility of practical applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and comments. Please see our response below.
- **Detailed presentation of experiment results**:
We clarify that we aim at reconstructing plausible fluid velocity fields from real videos to allow re-simulation and future prediction. For real videos, it is very challenging to collect groundtruth 3D density and velocity.
Therefore we evaluate on the applications including novel view video synthesis, novel view re-simulation, and novel view future prediction. For each of them, we show detailed quantitative results in Table 1 (three metrics for each task) and qualitative results in Figure 3, 4, 7 in our main paper. In addition, we show qualitative results on turbulence editing and velocity recovery in Figure 5 and Figure 6. These downstream applications reflect that our approach can reconstruct plausible real fluid fields.
- **Additional quantitative evaluation**:
In addition to the experiments on real fluid videos, we include new experiments on synthetic fluid scenes. These scenes provide 3D groundtruth density and velocity, allowing quantitative evaluations on them. Please refer to the "Evaluation on synthetic data" and "Evaluation on different viscosity levels" in the global response for results and discussion.
- **No comparison with other methods in addressing the visual ambiguity of fluid velocity**:
We respectfully disagree with this. We clarify that we have comparisons to PINF [Chu2022] and NeRFlow [Du2021], both of which aim to address the visual ambiguity of velocity/flow estimation from real videos, and both showcase plausible reconstruction of fluid scenes in their results. In particular, PINF [Chu2022] is the state-of-the-art fluid reconstruction method, which tries to address visual ambiguity through physics-informed losses similar to Physics-Informed Neural Networks (PINN) [Raissi2019]. PINF shows extensive results on synthetic scenes and a few real examples. NeRFlow approaches this with a set of temporal consistency losses and showcases fluid reconstruction in its "milk pouring" scene. Our comparison to them in Table 1 and Figures 3, 4, 5, and 7 clearly demonstrates that our approach achieves better results than these existing methods.
- **Additional comparison with other methods**:
In addition to PINF and NeRFlow, we compare to NeuroFluid [Guan2022] and GlobTrans [Franz2021]. NeuroFluid learns fluid dynamics to address velocity ambiguity. GlobTrans aims to reconstruct fluid fields using advection constraints and regularization terms to resolve the visual ambiguity of fluid velocity. Please refer to "Comparison to NeuroFluid" and "Comparison to GlobTrans" in the global response for results and discussion.
- **Computation time and memory consumption**:
As noted in L215 of our main paper, we report computational resource usage in our supplementary material. As stated in L29 of the supplementary material, we train our model on a single A100 GPU (GPU memory usage is around 30GB) for around 9 hours in total.
**Reference**:
[Chu2022] Chu, M., Liu, L., Zheng, Q., Franz, E., Seidel, H. P., Theobalt, C., & Zayer, R. (2022). Physics informed neural fields for smoke reconstruction with sparse data. ACM Transactions on Graphics (TOG), 41(4), 1-14.
[Du2021] Du, Y., Zhang, Y., Yu, H. X., Tenenbaum, J. B., & Wu, J. (2021, October). Neural radiance flow for 4d view synthesis and video processing. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV) (pp. 14304-14314). IEEE Computer Society.
[Raissi2019] Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378, 686-707.
[Guan2022] Guan, S., Deng, H., Wang, Y., & Yang, X. (2022, June). Neurofluid: Fluid dynamics grounding with particle-driven neural radiance fields. In International Conference on Machine Learning (pp. 7919-7929). PMLR.
[Franz2021] Franz, E., Solenthaler, B., & Thuerey, N. (2021). Global transport for fluid reconstruction with learned self-supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1632-1642).
---
Rebuttal 2:
Title: Happy to answer any further questions
Comment: Dear Reviewer 3dGa,
Thank you for reviewing our submission. We have posted our clarification and response to your suggestions and questions. We are happy to discuss with you and answer any further questions. We are looking forward to your feedback!
Best,
Authors of submission 1335
---
Rebuttal 3:
Title: Looking forward to discussion with you
Comment: Dear Reviewer 3dGa,
We have posted our clarification and response. We wish our response can address your concerns and we would like to hear your updated feedback and evaluation. We are more than happy to discuss with you and answer any further questions!
Best,
Authors of submission 1335
---
Rebuttal Comment 3.1:
Comment: I acknowledge that the authors have addressed my concerns well, and I recommend weak acceptance of this paper.
---
Reply to Comment 3.1.1:
Title: Thank you for updating your review!
Comment: Dear Reviewer 3dGa,
Thank you for updating your review! We will incorporate all the contents in the rebuttal to our revised manuscript.
Sincerely,
Authors of submission 1335 | Summary: This paper works on reconstructing fluid density and velocity from multi-view videos. The main idea is to inject visual cues into a NeRF-style representation. The authors propose physics-based regularization terms to deal with the visual ambiguity that videos cannot reflect the inner fluid states.
Strengths: Good motivation: Visual ambiguity indeed is the critical problem that needs to be solved in this setting.
Novel framework: Introducing physical losses is a challenging but natural choice. The introduced losses are general enough to inspire other researchers in related areas.
Weaknesses:
1. The setting is not new. Reconstructing fluid velocity and density from videos has already been studied in NeuroFluid (ICML 2022), yet I found no introduction of or comparison to it. Why?
2. The proposed method is not convincing to me.
- The proposed physics-based losses can only play a regularization role during training. To resolve the visual ambiguity, the framework would need direct supervision of the fluid velocity; without it, ambiguity still exists.
- Also, the key question is how to get the initial state of the fluid, which determines the performance of this work. The authors **should** clarify this point.
3. Experiments are incomplete, lacking key results.
- The experiments only show rendering results and velocity results. How about density? The authors claim that they can recover fluid density, but this is not shown in the experiments.
- How did you evaluate the rendering results? Directly render fluid from a view angle used in the training stage? Please state this before the experiments.
- Only smoke in a similar environment is evaluated. Can you show more examples, with more complex shapes and materials?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: For details, please refer to the weaknesses section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and comments! Please see our response below.
- **Relation to NeuroFluid**:
We clarify that our setting is different from NeuroFluid's. NeuroFluid focuses on learning fluid dynamics from a large amount of data; therefore, they train and evaluate on synthetic data, yet the synthetic data distribution can be too different from real data to generalize. Moreover, they assume a known initial state and no inflow source, which do not hold in some real scenes, such as the smoke plume scenes we used. In contrast, our goal is to reconstruct plausible fluid fields from real sparse multiview videos without assuming additional training data, so as to facilitate applications on real fluid videos such as novel view synthesis, re-simulation, future prediction, and turbulence editing.
- **Comparison to NeuroFluid**:
We show a comparison to NeuroFluid under "Comparison to NeuroFluid" in the global response above. Please note that since we use a real dataset for evaluation (as specified in L201 of our main paper), we do not have the groundtruth initial states that NeuroFluid requires as input. Therefore, we use the previous state-of-the-art method PINF [Chu2022] to reconstruct the first frame as the initial state for NeuroFluid. We will add references and discussion to our paper.
- **Physics-based losses for visual ambiguity and supervision from velocity**:
We clarify that we aim at reconstructing plausible fluid velocity fields from real videos to allow re-simulation and future prediction. We do not assume training data with groundtruth velocity, as it is very scarce for real scenes. Therefore, we propose physics-based losses to regularize the recovery of fluid velocity such that it is physically plausible for the downstream applications.
- **Initial state of fluid**:
Different from NeuroFluid which requires initial states and learns fluid dynamics, we aim at reconstructing the fluid fields. Thus, the "initial state" is our model output rather than input.
- **Evaluating 3D density**:
We clarify that all our experiments are done on **real videos** which do not have groundtruth 3D density and velocity fields. Therefore, we indirectly evaluate them by downstream applications including novel view synthesis, re-simulation, and future prediction. Following your suggestion, we additionally evaluate our method on synthetic examples which provide groundtruth 3D density. Please refer to the "Evaluation on synthetic data" in the global response above to see the results and discussion.
- **Evaluating rendering results**:
We clarify that we evaluate rendering results in a hold-out novel view that is unseen during training (L205-L206 in the main paper). In particular, each example in the ScalarFlow real dataset and synthetic dataset has 5 views. We take 4 views for training, and 1 view for testing. We use this training-testing split for all our experiments including novel view video synthesis, novel view re-simulation, and novel view future prediction.
- **Experiments on different examples**:
Please note that the additional synthetic data in "Evaluation on different viscosity levels" in the global response demonstrate different material properties (e.g., they are more viscous than real smoke), shapes, and inflows. We believe these additional examples provide a more diverse evaluation.
**Reference**:
[Chu2022] Chu, M., Liu, L., Zheng, Q., Franz, E., Seidel, H. P., Theobalt, C., & Zayer, R. (2022). Physics informed neural fields for smoke reconstruction with sparse data. ACM Transactions on Graphics (TOG), 41(4), 1-14.
---
Rebuttal 2:
Title: Happy to answer any further questions
Comment: Dear Reviewer qUcR,
Thank you for reviewing our submission. We have posted our response per your suggestions and questions. We are happy to discuss with you and answer any further questions. We are looking forward to your feedback!
Best,
Authors of submission 1335
---
Rebuttal Comment 2.1:
Title: Thanks for your response!
Comment: Dear author,
Thanks for your efforts, which answer most of my questions. But, there are two questions I need to discuss further.
1. I think only testing on the ScalarFlow dataset is limited. Could you compare with PINF on other real scenes, such as predicting water? Otherwise, you can explain why evaluating on the ScalarFlow is sufficient.
Minor: could you please show some image samples of the scene you used?
2. Could you please describe the angle interval of the 5 views in the novel-view synthesis?
Thanks!
---
Reply to Comment 2.1.1:
Title: Thank you for your questions!
Comment: Dear Reviewer qUcR,
Thank you for your further questions. Please find our responses below. We are happy to discuss further if you have any additional questions!
**1. Predicting real water scenes**:
We note that free-surface fluids like water have more complex dynamics and different appearance properties, making this a different problem from gases. In particular:
**Complex dynamics (simulation)**: Free-surface fluid dynamics must consider interface dynamics, i.e., how the free surface interacts with its surroundings such as the container and air. This also includes complex dynamic phenomena like wave breaking, droplet formation, and capillary waves. Moreover, the NS equations hold only within the interior domain, whose shape changes as the fluid evolves.
**Reflective appearance (rendering)**: Water has different appearance properties, such as strong reflection, refraction, and high transparency. This makes it difficult to model with direct volumetric rendering (we suspect this is why NeuroFluid, which also uses volumetric rendering, only tests on synthetic semi-opaque water without backgrounds or containers).
Given these differences in both simulation and rendering, we consider free-surface fluids out of our scope. We follow existing fluid reconstruction works, which mostly focus on gases and test on synthetic scenes and the ScalarFlow real dataset [Eckert2019, Zang2020, Franz2021, Chu2022], as ScalarFlow features a tractable, representative real setting. Please also note that we have added experiments on synthetic data demonstrating different material properties.
**2. Image samples**:
We have uploaded an image example of a ScalarFlow real scene in this anonymous link: https://ibb.co/phFwbNt Please note that this has been post-processed to remove background as done in existing work [Eckert2019, Franz2021, Chu2022].
**3. Angle intervals**:
Please kindly refer to the Figure 1 in this website: https://ge.in.tum.de/publications/2019-scalarflow-eckert/ for the capture setup. The five cameras are placed evenly in a 120 degree arc. We use the middle camera as test view and the other four as training views.
**Reference**:
[Eckert2019] ScalarFlow: a large-scale volumetric data set of real-world scalar transport flows for computer animation and machine learning. TOG2019
[Zang2020] Tomofluid: Reconstructing dynamic fluid from sparse view videos. CVPR2020
[Franz2021] Global transport for fluid reconstruction with learned self-supervision. CVPR2021
[Chu2022] Physics informed neural fields for smoke reconstruction with sparse data. TOG2022 | Rebuttal 1:
Rebuttal: We thank all reviewers for their time and feedback. We clarify that since our goal is to reconstruct plausible fluid fields from real videos, **all experiments in our main paper are on real captured data, as specified in L201**. Please find our summary of major changes and response to some common questions below. We will incorporate these changes to our revised paper.
**Summary of major changes per reviewers' suggestions**:
1. [*qUcR*, *3dGa*, *4cV8*, *MRuw*] We add evaluations on synthetic data that provides groundtruth 3D fields.
2. [*qUcR*, *3dGa*, *4cV8*, *MRuw*] We add evaluations on different viscosity levels using synthetic data.
3. [*5YF8*, *MRuw*] We add ablation study on using spatiotemporally-varying appearance.
4. [*3dGa*, *4cV8*, *MRuw*] We add discussion and comparison to GlobTrans which is a SOTA non-learning-based fluid reconstruction method.
5. [*qUcR*, *3dGa*, *MRuw*] We add discussion and comparison to NeuroFluid which is a recent fluid dynamics learning method.
6. [*4cV8*] We improve velocity visualization using slice rendering.
7. [*MRuw*] We add more references and discussion to recent related work.
**[*qUcR*, *3dGa*, *4cV8*, *MRuw*] Evaluation on synthetic data**:
We include synthetic examples for evaluating 3D density and velocity fields, generated with the ScalarFlow synthetic dataset generation code [Eckert2019]. We generate five examples with different inflow sources (randomized inflow area and density distribution) at higher viscosity and another five examples at lower viscosity. Since numerical viscosity is unavoidable, we simply use different simulation domain resolutions to synthesize fluids with different viscosity levels: 100x178x100 for the low-viscosity group and 80x142x80 for the high-viscosity group.
We compare to the state-of-the-art neural fluid reconstruction method PINF [Chu2022], which has shown competitive results in 3D fluid field reconstruction. Since the simulation groundtruth is defined only up to a scale, we use scale-invariant RMSE to measure performance. We compute metrics only where the groundtruth density is greater than $0.1$, to rule out empty space (which would otherwise dominate) for a clearer quantitative comparison. In particular, we use the volumetric density error (by querying the density network at the simulation grid points) to evaluate density prediction, and the warp error (i.e., using the velocity to advect the density and comparing to the GT density) to evaluate both density and velocity prediction. We also report novel view re-simulation results. We show qualitative examples in Figures R1 and R2 in the PDF.
We show quantitative results in Table 1 below: ours outperforms PINF in all metrics, on both 3D fields and 2D rendering, consistent with our observations on real data in the main paper.
Table 1: Evaluation on **synthetic** data.
| | Density error$\downarrow$ | Warp error$\downarrow$ | PSNR$\uparrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$ |
|---|---|---|---|---|---|
| PINF | 4.95 | 4.88 | 25.34 | 0.8641 | 0.1845 |
| Ours | **2.89** | **3.29** | **27.93** | **0.8643** | **0.1259** |
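The masked scale-invariant metric used in this comparison could be implemented along these lines (our own reconstruction from the description; the function name and the least-squares scale fit are assumptions, not the authors' code):

```python
import numpy as np

def scale_invariant_rmse(pred, gt, thresh=0.1):
    """RMSE after fitting the least-squares scale s that best aligns the
    prediction with a groundtruth defined only up to scale, computed on
    voxels where gt > thresh to rule out empty space."""
    mask = gt > thresh
    p, g = pred[mask], gt[mask]
    s = (p @ g) / max(p @ p, 1e-12)   # optimal global scale
    return float(np.sqrt(np.mean((s * p - g) ** 2)))
```

By construction, a prediction that is a uniformly scaled copy of the groundtruth scores a perfect (zero) error under this metric.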
**[*qUcR*, *3dGa*, *4cV8*, *MRuw*] Evaluation on different viscosity levels**
In addition to the overall evaluation above, we also include a separate evaluation for the different viscosity levels. We show the results in Table R1 in the PDF. Ours outperforms the SOTA method PINF on all metrics except high-viscosity SSIM, likely because high-viscosity velocity fields are dominated by laminar flow, which PINF tends to recover.
**[*qUcR*, *3dGa*, *MRuw*] Comparison to NeuroFluid**:
NeuroFluid [Guan2022] is a recent method for learning fluid dynamics. We use the official NeuroFluid code and their released pretrained transition model, and train it on the ScalarFlow real dataset following their instructions. NeuroFluid assumes a known initial state, while real data has no groundtruth initial state; thus, we use the SOTA fluid reconstruction method PINF [Chu2022] to reconstruct the first frame as the initial state. We include a comparison to NeuroFluid in Table R2 and Figure R3 in the PDF.
From the results we observe that NeuroFluid does not produce meaningful novel view synthesis, since it does not target real fluid reconstruction.
**[*3dGa*, *4cV8*, *MRuw*] Comparison to GlobTrans**:
GlobTrans [Franz2021] is a global grid optimization-based method specifically designed for fluid re-simulation and reconstruction. We use the official code release and show a comparison in Table R2 and Figure R4 in the PDF.
From the results we observe that ours achieves much better reconstruction fidelity, reflected by higher PSNR and SSIM, while GlobTrans yields better LPIPS. We note that GlobTrans assumes known lighting and uses a more sophisticated shading model. In contrast, ours does not assume known lighting and can thus be more general.
**[*5YF8*, *MRuw*] Spatiotemporally varying appearance**:
We add an ablation on predicting spatiotemporally-varying color to account for spatially-varying lighting. We show the novel view video synthesis (which mainly evaluates appearances) results in Table R3 and Figure R5 in the PDF.
We observe that this does not lead to significant differences on the ScalarFlow real dataset. This may be because the fluid material is homogeneous and the capture environment is controlled. For more complex scenes with complex lighting, using spatially-varying color may help further.
**Reference**:
[Chu2022] Physics informed neural fields for smoke reconstruction with sparse data. TOG2022
[Eckert2019] ScalarFlow: a large-scale volumetric data set of real-world scalar transport flows for computer animation and machine learning. TOG2019
[Guan2022] Neurofluid: Fluid dynamics grounding with particle-driven neural radiance fields. ICML2022
[Franz2021] Global transport for fluid reconstruction with learned self-supervision. CVPR2021
Pdf: /pdf/5ee1844851fbf84fc2645305df9db6ea58814829.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the authors propose a method (HyFluid) to infer fluid density and velocities from multiview videos. To deal with the visual ambiguities of fluid, physics-based losses are introduced to encourage physically plausible velocities. A neural velocity field and a vortex particle-based velocity are introduced to capture the irrotational energy and model the residual turbulent velocity, respectively. The experiments show that the proposed method can recover vortical flow details to some extent.
Strengths: The paper is overall well written and organized.
The proposed density loss and projection loss are well derived from the fluid mechanics.
The experiments show that the proposed method can perform better than the baselines.
Weaknesses: The authors claim that the method can give physics plausible estimation of fluid fields. The physics intuitions for the density loss and projection are clear according to the incompressible condition. However, I don't see obvious physics intuition for the laminar loss.
For the problem that visual appearance of fluids depends on lighting and fluid substance properties, there is no further exploration in this direction. The problem becomes more important when the fluid is liquid.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is MGPCG applied at every simulation step to project the velocity during training? What is the runtime cost of computing the projected velocity?
- Why encourage high-density regions to have non-zero velocity (line 137)? What is the physics intuition here?
- As the visual appearance of fluid highly depends on the rendering process, are the results very sensitive to the parameters of the renderer?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and comments! Please see our response below.
- **Physical intuition on laminar loss**:
Laminar flow does not manifest local density change, and thus inferring it from pure visual observations is challenging. We introduce this regularization term to account for the fact that even in constant-density fluid regions, there can still be laminar flow. Since this also depends on prior knowledge of the fluid to reconstruct, the laminar loss takes a flexible form: the hyper-parameter $\gamma$ models the prior belief of having laminar flow, e.g., when $\gamma=0$ it allows zero velocity even in high-density regions.
- **Further exploration on visual appearance**:
Following your suggestion, we allow learning spatially and temporally varying appearances. Please see the results and discussion in the "Spatially varying appearance" in the global response above.
- **Rendering parameters**:
We use volumetric rendering in our formulation (Eq. (3) in the main paper). The parameters include {near plane $t_n$, far plane $t_f$, number of samples for numerical integration $N$}, where $t_n$ and $t_f$ are determined by centering the sampling interval at the scene center (the geometric center of the camera principal rays) and empirically scaling it. We find that as long as there are enough samples (in our case, more than 64), the numerical integration is stable, and thus the results are not sensitive to these parameters.
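To make the quadrature concrete, here is a generic sketch of volumetric rendering along one ray (a conceptual stand-in, not the authors' exact Eq. (3) renderer); the values for $t_n$, $t_f$, $N$, the density, and the color are hypothetical toy inputs:

```python
import numpy as np

def volume_render(sigma, rgb, t):
    """Quadrature of the volume-rendering integral along one ray:
    densities sigma and colors rgb sampled at depths t."""
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))   # sample spacings
    alpha = 1.0 - np.exp(-sigma * delta)                 # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)          # accumulated pixel color

t = np.linspace(2.0, 6.0, 64)               # toy near/far planes t_n=2, t_f=6, N=64
sigma = np.full(64, 0.5)                    # toy constant density
rgb = np.tile([0.8, 0.3, 0.1], (64, 1))     # toy constant color
pixel = volume_render(sigma, rgb, t)
```

With enough samples the accumulated weights converge, which matches the rebuttal's observation that results are insensitive to the exact sampling parameters.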
- **On MGPCG**:
We use MGPCG in every training step to compute the projection loss. We implement MGPCG using Taichi [Hu2019], which allows GPU acceleration. We use three levels with a resolution of 128^3. On average, this costs around 100 ms per step. Thanks to our efficient implementation, the whole training takes only ~9 hours on an A100 GPU, as noted in our supplementary material.
**Reference**:
[Hu2019] Hu, Y., Li, T. M., Anderson, L., Ragan-Kelley, J., & Durand, F. (2019). Taichi: a language for high-performance computation on spatially sparse data structures. ACM Transactions on Graphics (TOG), 38(6), 1-16.
---
Rebuttal 2:
Title: Happy to answer any further questions
Comment: Dear Reviewer 5YF8,
Thank you for reviewing our submission. We have posted our response per your suggestions and questions. We are happy to discuss with you and answer any further questions. We are looking forward to your feedback!
Best,
Authors of submission 1335 | null | null | null | null | null | null |
NuTrea: Neural Tree Search for Context-guided Multi-hop KGQA | Accept (poster) | Summary: This paper proposes a tree search-based GNN model that contains three consecutive steps: expansion, backup, and node ranking. It also proposes a relation frequency-inverse entity frequency node embedding method. The proposed model achieves new SOTA on WebQSP and CWQ datasets.
Strengths: 1. It proposes a tree-search-based GNN method with backup propagation to extend the KG context, which is intuitive and novel.
2. The RF-IEF node embedding, motivated by TF-IDF, is also a novel method and experimentally effective.
3. The proposed method achieves SOTA on WebQSP and CWQ datasets.
4. Extensive experiments including incomplete KG setting, ablation study, and advantage analysis of the proposed method.
Weaknesses: 1. The enhancement relative to the leading benchmark approach, ReaRev, is marginal, notably on the CWQ dataset. The improvement is even slighter in the incomplete KG scenario, as highlighted in Table 2.
2. In such circumstances, a more detailed comparative analysis with ReaRev might be required. ReaRev, which employs a variation of the breadth-first search for adaptive reasoning, also strives to expand the modeling of the KG context. How is the proposed tree search method better than ReaRev? More crucially, given that RF-IEF supplements the GNN model, what could be the outcome if RF-IEF were integrated with ReaRev? Would it still perform worse than NuTrea?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In the ablation study in Table 3, we see a significant performance drop when not using the Backup step. Under this setting, have the authors tried varying the layers of GNN? It would be interesting to see the performance-#layer curves with and without the backup step.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1. Performance gains are marginal on ComplexWebQuestions and incomplete KG experiments.
NuTrea’s 0.7 H@1 gain aligns with the standard improvements observed in recently published works. For instance, TERP (COLING 2022) reported a 0.6 H@1 gain compared to the previous best, Rigel (EMNLP 2021) reported a 0.1 H@1 gain, and TransferNet (EMNLP 2021) lags behind NSM (WSDM 2021) by 0.2 H@1. Therefore, when evaluated within the context of these existing works, our 0.7 point gain is far from marginal.
For the incomplete KG experiments, on the other hand, it is important to note that they involve training with a considerably low portion of KG triplets (10%, 30%, and 50%). As a result, performance in such a setting is highly saturated, making any improvement even more noteworthy.
### Q2. How is the proposed tree search method better than ReaRev?
The key advantage of NuTrea over ReaRev is the Backup module, which considers the subtree contents (i.e., the unreached regions) without explicitly visiting or updating child nodes in the subtree. This is crucial for distinguishing between correct and incorrect node choices, especially when the same sequence of relations (i.e., meta paths) can lead to both correct and incorrect nodes. This enhancement allows NuTrea to make more informed decisions during the tree search based on broader subtree-level contexts, leading to overall performance improvement.
### Q3. If RF-IEF were integrated with ReaRev, would it still perform worse than NuTrea?
In fact, a plug-in experiment on ReaRev is already provided in Table 4 of Appendix I. We brought the table below for your convenience. In the table, we observed that ReaRev’s average performance improves with RF-IEF, but does not yet match that of NuTrea. Note, the values in parentheses are standard deviations across 5 runs.
| Models | ReaRev | NuTrea (Ours) |
|----------------|:----------:|:-------------:|
| without RF-IEF | 74.2 (1.4) | 75.5 (1.1) |
| with RF-IEF | 74.6 (0.8) | 76.3 (1.4) |
### Q4. The performance-#layer curves of NuTrea with and without the Backup step.
We have added the performance-#layer curve figure in the attached PDF file in the "Author Rebuttal by Authors" thread. The figures report the WebQuestionsSP F1 score of NuTrea with and without the Backup module. We observed that the model with Backup consistently outperforms its counterpart. Also, while the model without Backup does generally improve with more GNN layers, NuTrea with Backup reached the highest score with only 2 layers.
---
Rebuttal Comment 1.1:
Title: Question about the Backup step
Comment: Thanks for the rebuttal. But I'm still confused about the importance of the Backup step. In your answer to Q4, NuTrea w/o Backup can achieve competitive performance when scaling to 5 layers, which means that simply adding layers can bring a performance gain, undermining the importance of the Backup step. If the authors want to claim that adding layers increases latency, please provide the latencies of NuTrea w/ Backup (2 layers) and NuTrea w/o Backup (5 layers).
---
Reply to Comment 1.1.1:
Title: Additional Comment by the Authors
Comment: Absolutely! The following table compares the latency per training epoch of two NuTrea configurations. But please note that the latencies were measured in a new environment, and these values are not to be compared with the latency reported in Table 4 of our manuscript.
| Model Config. | Train Latency (sec/epoch) |
|---------------------------|:-------------------------:|
| 2-layer NuTrea w/ Backup (base) | 653.2 |
| 5-layer NuTrea w/o Backup | 1068.0 |
The "5-layer NuTrea without Backup" requires approximately 64% more training time than our original model, the "2-layer NuTrea with Backup". Thus, we'd like to claim that Backup is a much more efficient method for aggregating depth information than naively increasing the number of layers. | Summary: In this paper, a retrieval-based solution to the Multi-hop KGQA problem is proposed. The authors introduce a novel approach that incorporates two innovations: a GNN with bidirectional message passing (Expansion and Backup) and a novel method for embedding knowledge graph vertices, inspired by TF-IDF.
The paper features numerous experiments and comparisons with existing baselines using well-known datasets such as WebQuestionsSP, ComplexWebQuestions, and Meta-QA, all of which rely on Freebase KG.
Furthermore, the authors provide an extensive analysis of the proposed technique, showing that the introduced backup method makes the maximum contribution to the improvement of the results.
Strengths: In this paper, the authors introduce a new method to the solution of the KGQA problem. The backup component of their solution stands out as particularly intriguing because of its maximum contribution to the improvement of the final results. The method is explained in a clear manner, with all essential formulas described qualitatively. Explanation of the inner working of the method is clear. NuTrea, the proposed approach, demonstrates results that are comparable to the State-of-the-Art (SOTA). Additionally, the authors have conducted qualitative comparisons with existing methods. Openly releasing the code will benefit the community.
Weaknesses: The authors did not compare their proposed method with papers that showed significantly better results, such as DECAF (DPR + FiD-3B) (Yu, et al., 2023) with a Hit@1 of 78.8. The reason for not doing so was not explained.
One aspect of the system utilizes graph vertex embeddings, and the authors introduce a new approach called RF-IEF. However, it is unclear why the authors did not compare the performance of NuTrea with other graph node embeddings like PyTorch-BigGraph, which may have yielded better results. Currently, it is difficult to evaluate the effectiveness of the proposed vertex embedding approach.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Experiments conducted on Incomplete KG demonstrate that the proposed approach performs notably better compared to others. However, it is not specified in the paper whether RF-IEF was re-trained separately for each Incomplete KG or if the original one was used. If the latter case is true, the experiment is unfair and should be recalculated.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: NuTrea limits the search for answers to the vicinity of the original vertex, which restricts the depth of exploration within the graph. The initial depth of search is predetermined and additional heuristics are required.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1. Why is NuTrea not compared with other methods (e.g. DECAF) that have better results?
Among the various KGQA methods, the ones that outperform NuTrea belong to the “Semantic Parsing” category, where they leverage ground-truth logical queries (forms) during training (refer to section 2, Related Works). The logical queries are like ground-truth paths to the answer nodes, which can substantially enhance model performance through supervised training. On the contrary, NuTrea and its baselines belong to the "Information Retrieval" category, where direct supervision using ground truth logical queries is not utilized. To our knowledge, NuTrea demonstrates the best performance among the Information Retrieval approaches.
### Q2. Why is RF-IEF not compared with other graph node embeddings like PyTorch-BigGraph?
We compared RF-IEF with three commonly used node embeddings for knowledge graphs, namely TransE, DistMult, and ComplEx. These node embeddings were trained solely on the training set, just as we computed the RF-IEF only on the training data. As shown in the table below, the results demonstrate that NuTrea with RF-IEF significantly outperforms the other node embeddings. We conjecture that the main reason is that the other node embeddings were not designed for the inductive setting of the KGQA task. That is, node entities that were not seen during the embedding training steps had to be initialized to zero vectors, which would have inevitably damaged performance. Our RF-IEF (and also the previously used method from NSM [1]), on the other hand, is better suited for inductive settings.
In the case of PyTorch-BigGraph, the pretrained embeddings were labeled with different IDs, preventing us from mapping the embeddings to the corresponding nodes.
| KG Node Embeddings | WQP H@1 | WQP F1 |
|:------------------:|:-------:|:------:|
| TransE | 74.6 | 70.0 |
| DistMult | 75.5 | 69.9 |
| ComplEx | 74.1 | 68.7 |
| NSM [1] | 76.8 | 71.5 |
| **RF-IEF** (ours) | **77.4** | **72.7** |
[1] Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, and Ji-Rong Wen. Improving multi-hop knowledge base question answering by learning intermediate supervision signals. In WSDM, 2021.
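Since the exact RF-IEF formula is not reproduced in this thread, the following is only a speculative TF-IDF-style analogue (the toy data and the log weighting are our assumptions): count each node's incident relation types (Relation Frequency) and down-weight relation types that touch many entities (Inverse Entity Frequency):

```python
import numpy as np

# Toy KG: each node's incident relation types (hypothetical data).
node_relations = {
    "m.01": ["film.directed_by", "person.place_of_birth"],
    "m.02": ["film.directed_by"],
    "m.03": ["person.place_of_birth", "person.place_of_birth", "location.contains"],
}
relations = sorted({r for rs in node_relations.values() for r in rs})
r_idx = {r: i for i, r in enumerate(relations)}

# Relation Frequency: counts of each relation type incident to the node.
rf = np.zeros((len(node_relations), len(relations)))
for n, rs in enumerate(node_relations.values()):
    for r in rs:
        rf[n, r_idx[r]] += 1.0

# Inverse Entity Frequency: down-weight relation types seen at many entities.
ef = (rf > 0).sum(axis=0)                  # entities per relation type
ief = np.log(len(node_relations) / ef)     # IDF-style weighting (assumed form)
node_features = rf * ief                   # speculative RF-IEF node features
```

Like the TF-IDF it is modeled after, this requires only counting over the (possibly incomplete) training KG, which is consistent with the inductive-setting argument above.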
### Q3. Was RF-IEF retrained for each incomplete KG experiment?
Yes, the RF-IEF was recomputed for each incomplete KG experiment, and no pretrained parameters were used for RF-IEF. Thus, the KG incompleteness experiment in Table 2 of the manuscript is valid. | Summary: The paper presented two improvements to graph-based (KG) QA tasks: (1) Backup step, and (2) RF-IEF. Both proposed methods, according to Table 3, lead to significant improvement on multiple multi-hop QA tasks.
Strengths: The proposed idea of including a Backup step, which resembles MCTS, is very interesting. Experimentally, it leads to the biggest difference (> 2 points) on the WQ dataset. To my knowledge, this is the first paper that explicitly writes out the backup term, while previous research relies on the GCN to capture contextual information from the neighbors.
The RF-IEF is also an interesting observation. The improvement, however, is not as significant. As mentioned by the authors, "Many KG entities (nodes) are pronouns that are not informative, and several KGQA datasets [11, 12] consist of encrypted entity names." Can you please run your model on a subset of questions with "pronouns" and/or "encrypted entity names" to justify this claim?
Weaknesses: Overall, this is a good paper.
Please consider running the experiments suggested above, i.e., with "pronouns" and/or "encrypted entity names".
Other comments:
1. Line 105, it's a bit unfair to say "Following the standard protocol in KGQA [5], the subject entities in 𝑥𝑞 are given and assumed always to be mapped to a node in V". There are many works which don't make that assumption. It would be better to state this as an assumption you made and make sure that the numbers in Tables 1, 2, and 3 all follow it.
2. Can you please specify what are "edge type" in line 116?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you please specify your loss function? Is the model trained end-to-end? Any intermediate supervision?
2. Can you please explain why the ablation study is performed on WQ? Some questions in WQ are 1-hop questions. CWQ should have more multi-hop questions where BackUp should benefit more.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please see weakness/questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1. Can you run your model on a subset of questions with "pronouns" and/or "encrypted entity names"?
All the questions in WebQuestionsSP and ComplexWebQuestions entail encrypted KG entities. Thus, the results in our paper already represent the experimental setting in question.
### Q2. Is "Following the standard protocol in KGQA, the subject entities in $x_q$ are given and assumed always to be mapped to a node in $\mathcal{V}$" the authors’ assumption?
This is not an assumption we made on our own. It is a standard protocol of KGQA datasets: the seed nodes (subject entities) are always provided along with the KG subgraph for each question. Thus, the assumption holds as long as we evaluate our model on WebQuestionsSP, ComplexWebQuestions, or MetaQA.
### Q3. What is "edge type" in line 116?
In this work, “edge type” is equivalent to “relation type”. The edge type (relation type) defines the factual relationship between the two connected nodes (entities) on the KG.
### Q4. Can you specify your loss function? Is the model trained end-to-end? Any intermediate supervision?
Yes, the model is trained end-to-end without any intermediate supervision. As specified in Line 122~123, we use the KL divergence loss between the predicted score $\mathbf{\hat{y}} \in \mathbb{R}^{|\mathcal{V}\_q|}$ and the ground truth multi-hot vector $\mathbf{y} \in \mathbb{B}^{|\mathcal{V}\_q|}$. That is,
$\mathbf{\hat{y}} = \text{NuTrea}(\\{\mathbf{q}\_{\text{exp}}^{(i)}\\}\_{i=1}^N, \\{\mathbf{q}\_\text{bak}^{(j)}\\}\_{j=1}^M, \mathcal{V}\_s)$\
$\mathcal{L} = \text{KLD}(\mathbf{\hat{y}}, \mathbf{y}),$
where $\\{\mathbf{q}\_{\text{exp}}^{(i)}\\}\_{i=1}^N$ refers to the expansion instructions, $ \\{\mathbf{q}\_\text{bak}^{(j)}\\}\_{j=1}^M$ refers to the backup instructions, $\mathcal{V}\_s$ is the set of seed nodes, and $\text{KLD}(\cdot)$ is the KL divergence function.
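A minimal sketch of this training objective over candidate-node scores (normalizing the multi-hot target to a distribution is our assumption about the exact formulation):

```python
import numpy as np

def kl_answer_loss(logits, multi_hot):
    """KL(y || softmax(logits)), with the multi-hot answer vector y
    normalized to a distribution over the candidate nodes V_q."""
    z = logits - logits.max()                       # stable log-softmax
    log_p = z - np.log(np.exp(z).sum())
    y = multi_hot / multi_hot.sum()
    mask = y > 0
    return float(np.sum(y[mask] * (np.log(y[mask]) - log_p[mask])))

logits = np.zeros(2000)
logits[[3, 17]] = 10.0                    # model confident on the two answers
y = np.zeros(2000); y[[3, 17]] = 1.0      # ground-truth multi-hot vector
loss_good = kl_answer_loss(logits, y)
loss_bad = kl_answer_loss(np.zeros(2000), y)   # uniform scores: high loss
```

With a normalized target, this KL objective differs from multi-label cross-entropy only by a constant, so it trains end-to-end exactly as described.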
### Q5. Ablation study on CWQ.
The ablation results on ComplexWebQuestions are provided in the table below.
| RF-IEF | Backup | CWQ H@1 | CWQ F1 |
|:------:|:------:|:-------:|:------:|
| v | v | 53.6 | 49.5 |
| v | | 52.7 | 48.7 |
| | v | 53.1 | 47.4 |
Summary: The proposed model adopts a message-passing scheme that probes the unreached subtree regions to boost node embeddings. The work also introduces the Relation Frequency–Inverse Entity Frequency (RF-IEF) node embedding, which considers the global KG context to better characterize ambiguous KG nodes. The method shows some effectiveness over multiple datasets.
Strengths: - Good results on multiple datasets
- RF-IEF style node embeddings introduced. Might be useful in other works too.
- Nice Analysis section.
Weaknesses: - Model sizes/compute are not clearly discussed. Some information is present in Table 4, but it is insufficient.
- How is the Backup step different from a bigger expansion set in previous works? This model doesn't bring any significant inductive bias over previous models.
1. The Backup is like a DFS during the expansion BFS. It really depends on the dataset where you want to focus your compute: one could either explore or exploit.
2. How would this model hold up in 4- or 5-hop QA? Would it not be extremely inefficient?
- The gains are somewhat marginal and might just be achieved by increasing parameters/expansion depth in previous models.
- RF-IEF is pitched as a generalized node embedding method but was not tested in other models. Although some nice analysis shown.
- Bad writing. Lots of forward references to later text (Eq. 1, Eq. 5).
- IG model descriptions are very hand-wavy in the paper and appendix.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - Are relation and relation type the same thing?
- Can you list situations where this backup not useful? Or is it always useful?
- Also address the weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The authors discuss some limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1. Model size comparison is needed.
Below is a table with the number of model parameters of NuTrea and ReaRev. Although NuTrea contains more parameters, it requires far fewer training GPU hours than ReaRev. Also, increasing the model size of ReaRev does not necessarily enhance performance (see Q2).
| | Parameters | Training GPU Hours |
|:------:|:----------:|:------------------:|
| NuTrea | 50 M | 2.9 Hrs |
| ReaRev | 24 M | 4.3 Hrs |
### Q2. Can performance gain be achieved by increasing parameters or expansion depth in previous models?
No significant performance improvement was observed in the previous model, ReaRev, by simply increasing the expansion depth (i.e., the number of layers). Various expansion depths were tested, ranging from 2 to 5, but the H@1 and F1 performances did not improve much with deeper models. In fact, the original model with 2 layers performed better than deeper variants. Additionally, an attempt was made to enhance the ReaRev-5 model's width by increasing its dimension to 100, denoted as ReaRev-5X in the table below. However, even with these modifications, the performance did not match that of NuTrea. To ensure fairness in comparison, all models, including NuTrea, were trained again under identical environments and conditions.
| | # layers | H@1 | F1 |
|-----------------|:--------:|:--------:|:--------:|
| ReaRev-2 (base) | 2 | 75.4 | 70.4 |
| ReaRev-3 | 3 | 74.0 | 69.9 |
| ReaRev-4 | 4 | 73.8 | 69.5 |
| ReaRev-5 | 5 | 74.4 | 70.5 |
| ReaRev-5X | 5 | 74.9 | 70.2 |
| NuTrea (ours) | 2 | **77.3** | **72.2** |
### Q3. How is the Backup step different from a bigger expansion set in the previous works?
Compared to previous message-passing methods, our Backup's max-pooling operator (Eq. (7) of the manuscript) has a unique advantage in aggregating relevant information from the bag of relations within the multi-hop neighbors (i.e., the subtree). Backup's effectiveness has been thoroughly demonstrated in our ablation studies (Table 3 of the manuscript), providing 3.4 H@1 and 0.6 F1 improvements over the base setting. Furthermore, the experimental results in Q2 reveal that ReaRev with a greater expansion set does not necessarily improve performance.
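The pooling idea can be sketched as follows (a conceptual stand-in with hypothetical data, not the manuscript's exact Eq. (7)): each frontier node aggregates the embeddings of the relation types found anywhere in its unreached subtree via an element-wise max, without visiting the subtree's nodes individually:

```python
import numpy as np

rng = np.random.default_rng(0)
rel_emb = rng.normal(size=(5, 8))            # 5 relation types, embedding dim 8

# Relation types observed anywhere in each frontier node's unreached subtree
subtree_rels = {"node_a": [0, 2], "node_b": [1, 3, 4]}

def backup_pool(subtree_rels, rel_emb):
    """Element-wise max over the bag of subtree relation embeddings, so a
    frontier node is scored by the most salient relation below it."""
    return {n: rel_emb[rels].max(axis=0) for n, rels in subtree_rels.items()}

pooled = backup_pool(subtree_rels, rel_emb)
```

The max keeps the strongest evidence from each embedding dimension, so two frontier nodes reached by the same meta path can still be told apart by what lies beneath them.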
### Q4. Will NuTrea be extremely inefficient in 4 or 5 hop QA?
Each NuTrea layer consists of three steps: (1) Expansion, (2) Backup, and (3) Node ranking. As the model complexity scales linearly with the number of layers (or hops), similar to GNNs, handling 4 or 5-hop QA settings by adding a few more NuTrea layers will not impact efficiency significantly. NuTrea remains sufficiently scalable for such scenarios.
### Q5. RF-IEF was not tested in other models.
In fact, a plug-in experiment on ReaRev is already provided in Table 4 of Appendix I. We brought the table below for your convenience. We observed that ReaRev’s average performance improves with RF-IEF. Note, the values in parentheses are standard deviations across 5 runs.
| Models | ReaRev | NuTrea (Ours) |
|----------------|:----------:|:-------------:|
| without RF-IEF | 74.2 (1.4) | 75.5 (1.1) |
| with RF-IEF | 74.6 (0.8) | 76.3 (1.4) |
### Q6. Bad writing - Lots of forward references to future text.
We’ll polish our writing and reduce forward references in our final version. Thank you.
### Q7. IG model descriptions are very hand-wavy.
The Instruction Generator (IG) is a commonly used text-processing module in many previous KGQA works, including NSM and ReaRev, and its details have already been described in Appendix B. To further illustrate, an IG takes the natural language question $x_q$ as input and outputs question representations $\\{\mathbf{q}_\text{exp}^{(i)}\\}\_{i=1}^N$ as
$\\{\mathbf{q}\_\text{exp}^{(i)}\\}\_{i=1}^N = \text{IG}_\text{exp}(x_q).$
In the IG function, a tokenizer converts input $x_q$ to tokens $\\{\mathbf{x}\_t\\}\_{t=1}^T$ that are used to retrieve the sentence embedding $\mathbf{q}_\text{LM}$ with a language model $\text{LM}(\cdot)$ (e.g., SentenceBERT) as
$\mathbf{q}\_\text{LM} = \text{LM}(\\{\mathbf{x}\_t\\}\_{t=1}^T).$
To deterministically sample a sequence of sentence representations, a quasi-Monte Carlo (i.e., non-probabilistic) sampling approach is adopted. $\mathbf{q}_\text{LM}$ is first used to compute attention weights $a_t^{(i)}$ as
$\mathbf{q}^{(i)} = \boldsymbol{W}^{(i)} (\mathbf{q}^{(i-1)} || \mathbf{q}\_\text{LM} || \mathbf{q}\_\text{LM} - \mathbf{q}^{(i-1)} || \mathbf{q}\_\text{LM} \odot \mathbf{q}^{(i-1)})$\
$a_t^{(i)} = \text{Softmax}(\boldsymbol{W}_a (\mathbf{q}^{(i)} \odot \mathbf{x}_t)),$
where $i \in [1,N]$, and $\mathbf{q}^{(0)}$ is a zero vector. Also, $||$ indicates the concatenation operator, and $\boldsymbol{W}\_a \in \mathbb{R}^{D\times D}$, $\boldsymbol{W}^{(i)} \in \mathbb{R}^{D \times 4D}$ are learnable matrices. Finally, each question representation $\mathbf{q}\_\text{exp}^{(i)}$ is computed as
$\mathbf{q}\_\text{exp}^{(i)} = \sum_t a_t^{(i)} \mathbf{x}_t.$
The same process is repeated to compute the Backup instructions. We will add this enhanced explanation in our final version.
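The derivation above can be sketched in code. This is only an illustrative stand-in: the LM sentence embedding is replaced by mean pooling over tokens, and the per-token attention score is reduced to a scalar with a learned vector $\mathbf{w}_a$ in place of the matrix $\boldsymbol{W}_a$ (that reduction is our assumption):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
D, T, N = 8, 6, 3
x_tok = rng.normal(size=(T, D))        # token embeddings {x_t}
q_lm = x_tok.mean(axis=0)              # stand-in for the LM sentence embedding
W = rng.normal(size=(N, D, 4 * D))     # learnable W^(i)
w_a = rng.normal(size=D)               # scoring vector (scalar-score assumption)

q_prev, instructions = np.zeros(D), []
for i in range(N):
    feat = np.concatenate([q_prev, q_lm, q_lm - q_prev, q_lm * q_prev])
    q_i = W[i] @ feat                                  # q^(i)
    a = softmax(x_tok @ (w_a * q_i))                   # attention a_t^(i) over tokens
    instructions.append(a @ x_tok)                     # q_exp^(i) = sum_t a_t x_t
    q_prev = q_i
```

Each instruction is a convex combination of token embeddings, conditioned on the previous instruction, which matches the sequential nature of the expansion steps.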
### Q8. Are relation and relation type the same thing?
The term “relation” refers to the edge or link between two entities in a KG, representing a factual connection between them. On the other hand, “relation type” refers to the type or category of the edge (link).
### Q9. Can you list situations where Backup is not useful? Or is it always useful?
Backup is less useful when the global context is less important. Particularly, if the question does not incorporate any conditions or constraints that require considering the broader context of the graph, the Backup module might become redundant. Additionally, if the answer nodes can be easily reached within a few hops from the starting node, the Expansion module alone might be enough to find the correct answer node.
---
Rebuttal Comment 1.1:
Comment: - Most Answers are convincing.
- RF-IEF as a generalized method: marginal gain for ReaRev. Not convinced.
- ReaRev at higher depth doesn't perform well. This likely means that gains in NuTrea are from the parameter increase and not from the actual Backup technique.
No Rating change
---
Reply to Comment 1.1.1:
Title: Additional Comment by the Authors
Comment: We appreciate your time in going over our responses! We are glad that you found most of our answers convincing. To resolve your remaining concerns, we have added a few more lines below.
With regard to your concerns for marginal gains, it is worth noting that RF-IEF was seamlessly integrated into ReaRev, **maintaining the original model hyperparameters to ensure a fair comparison with the base model**. However, ReaRev + RF-IEF may favor a different model architecture (e.g. fewer layers or iterations), and selecting a different hyperparameter set could have further improved its performances.
Furthermore, we believe the underperformance of the deeper ReaRev models does not imply that NuTrea took advantage of the larger parameter size. **Our experiment in Q2, in fact, demonstrates that adding more ReaRev layers (i.e., hops) cannot substitute for NuTrea's Backup module.** This actually highlights the effectiveness of our approach! Notably, the ReaRev-5X model in the table was trained with the same entity embedding dimension as NuTrea. Even with the same entity embedding size, NuTrea outperformed ReaRev-5X by a significant margin of 2.4 H@1 and 2.0 F1 points. | Rebuttal 1:
Rebuttal: We thank all five reviewers for their strong support and constructive comments on our work. We are glad that the reviewers found our work promising and interesting. Our responses for all the reviewers' questions are provided below. Please go over our responses and let us know if there are issues that are yet unresolved. Attached is a PDF file containing the figure that answers Q4 of reviewer 5UYV.
Pdf: /pdf/e4f837fab287b5e0035287186a1daa07e8d05952.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposed NuTrea, a graph neural network (GNN) model for Multi-hop KGQA. NuTrea considers broader KG contexts using a tree search scheme to find paths to answer nodes. It uses message-passing layers to explicitly consider question constraints involving bi-directional information (or future context). Also, this paper proposed an interesting node embedding method RF-IEF to characterize KG node entities.
Strengths: - The idea of predicting entities by bi-directional information is reasonable.
- The Relation Frequency–Inverse Entity Frequency metric brought new insight into initializing embeddings of graph nodes.
- The experiments demonstrate the effectiveness of the proposed method.
- The paper is well-written.
Weaknesses: - The core technical design of this work involves utilizing a backup mechanism that allows models to leverage bidirectional information (or global context), enabling them to distinguish entities with similar paths. Some embedding-based methods (e.g., EmbedKGQA and TERP) and GNN-based methods (e.g., SQALER and GraftNet) can also utilize bidirectional information, somewhat sharing a similar high-level idea. The unique superiority compared with these methods and the connections to them deserve more discussion and analysis. Otherwise, it is unclear how to position this work in the current research background, making the technical contribution seem incremental.
- Although the main results and ablation study show performance superiority, most further analyses are about qualitative evaluation. Figure 2 is more about a concept-illustration example instead of an expressiveness analysis. Putting it into other sections (e.g., introduction or method) may be better. Related to the first issue, this work needs more experimental design and in-depth analysis to highlight the unique merits of the proposed methods regarding leveraging the global context. For example, I am curious to see if this method can outperform other models when both the correct and incorrect entities have the same path.
After reading the authors’ rebuttal, most of my concerns are addressed. I have no objection to accepting this paper if the authors include more discussion of related works exploiting bidirectional information.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I have no concerns about the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1. What makes NuTrea different from previous embedding-based and GNN-based methods?
While previous methods (e.g., EmbedKGQA, GraftNet, SQALER) _simultaneously_ update embeddings of all the KG nodes, our NuTrea gradually expands the subgraph and _sequentially_ updates nodes from the seed node towards the answer node, similar to a path-search algorithm. The superiority of this approach has been demonstrated in several recent works, such as NSM (WSDM 2021), TERP (COLING 2021) and ReaRev (EMNLP-Findings 2022). We believe that paths provide a more effective representation of question semantics and are better suited to process the KG in alignment with the given question. Our work builds on this recent line of research by incorporating bi-directional information for enhanced path searching.
### Q2. Need for analysis to highlight NuTrea’s merit in leveraging global context. For example, can this method outperform other models when both the correct and incorrect entities have the same path?
As suggested, we analyzed the ratio of questions that have a KG path (i.e., meta path) that leads to both the correct and incorrect node choices. In the WebQuestionsSP dataset that utilizes the Freebase KG [1], approximately 72% of the questions entailed such KG paths. Thus, outperforming other models on this dataset naturally indicates NuTrea’s unique merit in dealing with circumstances where global context should play a critical role. On the other hand, the MetaQA (1, 2, and 3-hop) dataset that adopts the WikiMovies KB [2] only contained 34% of the questions with such a KG path, which possibly explains the relatively smaller gain in performance on this dataset.
[1] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In ACM SIGMOD, 2008.\
[2] Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. In EMNLP, 2016.
### Q3. Putting Figure 2 in the Introduction or Method section may be better.
We will reflect your suggestion in our final version. Thank you.
---
Rebuttal Comment 1.1:
Comment: - SQALER is more like a path-searching algorithm, while TERP is not. I understand that this paper proposes a new method by incorporating bi-directional information to enhance path searching. Since the bi-directional perspective is not new, what I would like to see is more analyses and comparisons with bi-directional methods and what makes NuTrea better.
- The response to Q2 is reasonable.
---
Reply to Comment 1.1.1:
Title: Additional Comment by the Authors
Comment: Thank you for your response. The following is what we understand about the most recent works on multi-hop KGQA.
- TERP
TERP proposes to **align** the KG paths with the natural language question via the rotate-and-scale framework. Here, the question’s textual information is encoded with an LM-based Question Encoder, and the KG relation paths are encoded using both the textual information of the relation and the KG embeddings.
So yes, you are absolutely correct in that TERP is more of an embedding-based approach than a path-searching method. We just wanted to emphasize that aligning (or searching) paths with respect to the question semantics is key to handling the multi-hop KGQA problem.
- SQALER
SQALER also tries to **align** the question with the KG, by first extracting a coalesced relational representation from it. Then, an “edge-level” model (e.g. GCN) is applied to refine the solution on the original KG. This provides an efficient means to decouple logical reasoning and multi-hop reasoning, thereby creating a scalable framework for multi-hop KGQA.
- ReaRev
ReaRev is more like a path-searching method that tries to **search** its way from the seed node to the answer nodes while considering the question semantics. That is, each reasoning step (or GNN layer) is a 1-hop expansion of the search area, similar to a BFS search on the KG.
Our NuTrea extends this path-**search** framework of ReaRev by enhancing the algorithm with bi-directional information. To our knowledge, this is the **first** attempt at incorporating bi-directional information into KG path search. Our approach enriches the context the model can leverage in searching its way to the answer nodes, via our efficient module dubbed “Backup”.
In order to demonstrate the effectiveness of Backup, we compared NuTrea with deeper versions of ReaRev, which, in theory, should be able to cover the bi-directional information via more search hops (i.e., layers).
| | # layers | H@1 | F1 |
|-----------------|:--------:|:----:|:----:|
| ReaRev-2 (base) | 2 | 75.4 | 70.4 |
| ReaRev-3 | 3 | 74.0 | 69.9 |
| ReaRev-4 | 4 | 73.8 | 69.5 |
| ReaRev-5 | 5 | 74.4 | 70.5 |
| NuTrea (ours) | 2 | **77.3** | **72.2** |
In the table, various depths were tested, ranging from 2 to 5. However, the model performances did not improve with deeper models. This, in retrospect, indicates that our NuTrea’s Backup module is both effective and efficient in handling the bi-directional information for path searching. We will add these discussions in our final version. | null | null | null | null | null | null |
Subclass-Dominant Label Noise: A Counterexample for the Success of Early Stopping | Accept (poster) | Summary: This paper proposes a new type of noisy labels called subclass-dominant noisy labels and introduces an algorithm called NoiseCluster based on this noisy label modeling. In the experimental section, the authors demonstrate the superiority of NoiseCluster over previous approaches in the subclass-dominant noisy label cases. Additionally, NoiseCluster exhibits better performance than previous algorithms on the real-world Clothing1M dataset.
Strengths: - Introduces a new type of noisy label modeling.
- Well-written paper.
- Conducts extensive experiments.
Weaknesses: - The justification for the proposed subclass-dominant noisy label model is lacking. It is unclear whether this type of noisy labels frequently occurs in the real-world. For instance, it would be helpful to investigate if the Clothing1M dataset exhibits similar phenomena.
- The analysis should include the performance of NoiseCluster on other types of noisy label models such as symmetric, instance, or asymmetric. This would provide a more comprehensive evaluation of the algorithm.
- Further analysis on real-world datasets is necessary. While the authors evaluate NoiseCluster on the Clothing1M dataset, it would be beneficial to assess its performance on other real-world benchmarks such as WebVision, Food101N, and similar datasets.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: -
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations of this paper are summarized in the "Question" and "Weakness" sections
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
W1: The justification for the proposed subclass-dominant noisy label model is lacking ... investigate if the Clothing1M dataset exhibits similar phenomena.
A1: Thank you for your insightful feedback. We agree that justifying the presence of SDN and the noise clustering phenomenon is pivotal for our study. For further evidence of the existence of SDN, please see our response to General Question 2. Detailed results are available in the attached one-page PDF.
W2: The analysis should include the performance of NoiseCluster on other types of noisy label models such as symmetric, instance, or asymmetric. This would provide a more comprehensive evaluation of the algorithm.
A2: In this study, our primary focus is on examining a specific type of real-world label noise: Subclass Dominant label Noise (SDN). NoiseCluster has been specifically designed to tackle SDN.
For other types of synthetic label noise, such as symmetric, instance-dependent, or asymmetric noise, please see our response in General Question 1.
W3: Further analysis on real-world datasets is necessary ... assess its performance on other real-world benchmarks such as WebVision, Food101N, and similar datasets.
A3: We agree that evaluating our method, NoiseCluster, on additional real-world benchmarks would strengthen our findings and provide a more comprehensive understanding of SDN. Following your suggestion, we delve into SDN within the WebVision dataset. We find that SDN is prevalent across classes within the mini WebVision dataset. Moreover, clustering effects, specific to SDN, are distinctly evident therein. We pinpointed three unique SDN cases in WebVision and have provided visualizations of their associated images. These can be viewed in the attached one-page PDF.
---
Rebuttal Comment 1.1:
Title: Answer for official comment
Comment: Thank you for your detailed responses. I have read the responses to my question, and I will maintain the score regarding this. (rating 5)
---
Reply to Comment 1.1.1:
Comment: Thank you for your invaluable suggestions. Due to the page limitations during the rebuttal, we will provide more evidence for the existence of SDN and its negative impacts in the upcoming version.
Strengths: - This paper introduces a novel type of label noise called subclass-dominant label noise (SDN), providing a well-motivated and insightful perspective on the limitations of early stopping in the presence of SDN. By demonstrating the inapplicability of early stopping for training with this type of label noise, the authors shed light on the challenges faced by early stopping in handling real world label noise.
- Experimental results showcase the superior performance of the proposed method on both synthetic and real-world datasets, highlighting its effectiveness in addressing the issues posed by SDN.
Weaknesses: - The paper would benefit from a comparison with instance-dependent label noise robust methods, which could provide a comprehensive evaluation of the proposed approach.
- The evaluation of methods in this study is limited to a single synthetic dataset and one real-world dataset. Assessing the proposed approach on a wider range of datasets can further demonstrate the generalizability and robustness of their method in various practical scenarios.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the authors provide further clarification on the differences between SDN and IDN? Moreover, given that there have been existing studies on IDN, what is the significance of specifically investigating and studying SDN?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: please refer to the Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Q1: The paper would benefit from a comparison with instance-dependent label noise robust methods, which could provide a comprehensive evaluation of the proposed approach.
A1: Thank you for your insightful suggestion. In this work, we have included 12 baselines, many of which have been demonstrated to perform well on IDN. For example, PTD-R-V and BLTM-V are two methods specifically designed to address IDN, and both DivideMix and PES(semi) show strong performance on the CIFAR-N dataset, which contains human annotations. Despite their success in these settings, all of these baselines encounter serious difficulties when dealing with SDN, which validates the effectiveness of NoiseCluster in handling SDN.
Q2: Assessing the proposed approach on a wider range of datasets can further demonstrate the generalizability and robustness of their method in various practical scenarios.
A2: We highly agree with the importance of evaluating our method on a broader range of real-world datasets. In pursuit of this, we undertook additional experiments on mini WebVision and revisited our experiments on Clothing1M to confirm the presence of SDN and the noise clustering phenomenon. Our observations indicate that SDN is widespread across classes in both datasets. Moreover, clustering effects inherent to SDN are clearly observable, affirming the efficacy of our proposed method in detecting SDN. We've pinpointed six unique SDN cases from WebVision and Clothing1M and closely examined their corresponding images. These findings are detailed in the attached one-page PDF.
For other types of synthetic label noise, such as symmetric or asymmetric noise, please see our response in General Question 1.
Q3: Could the authors provide further clarification on the differences between SDN and IDN? Moreover, given that there have been existing studies on IDN, what is the significance of specifically investigating and studying SDN?
A3: We truly appreciate this insightful question. Since Instance-Dependent Noise (IDN) is still an area under active research, multiple explanations for IDN may exist. Below, we offer our perspective.
IDN is derived from a noise modelling perspective and its definition is exceptionally flexible, encompassing all types of label noise, including SDN. This comprehensive approach enables the pursuit of a universal solution that can address all forms of label noise. However, this broad approach also presents challenges, such as defining a generation method that accurately mimics all real-world label noise types, or identifying typical real-world datasets that include all forms of label noise.
On the other hand, SDN is a distinct type of label noise we have identified empirically. We came across specific mislabeled examples resistant to early stopping yet exhibiting recurring patterns. Based on these findings, we deduced the underlying causes, articulated the concept of SDN, and embarked on thorough experiments to explore its properties, all of which are elaborated upon in our paper. The value of our SDN study stems from its emulation of a prevalent type of real-world label noise, notably observed after early stopping.
Overall, since current methods may struggle with SDN, we believe specifying SDN will aid in the study of IDN. | Summary: In this work, the authors present a new type of label noise: subclass-dominant label noise. The authors show that the model trained over time can better capture such label noise in the feature space, and based on this idea, a clustering then correcting pseudo labels algorithm, NoiseCluster, is designed to identify and correct SDN. The proposed method can be combined with current label noise and semi-supervised learning methods, and is validated on the authors' constructed dataset, cifar20-SDN, and another real-world dataset, Clothing 1M.
Strengths: - The paper is well-organized. The Introduction and related work sections are clear and well introduces the studied problem.
- According to the proposed subclass-dominant label noise characteristics, the authors constructed a corresponding dataset and proposed a methodology addressing it. The proposed approach is heuristically reasonable and applicable.
Weaknesses: - My main concern is the existence of proposed subclass-dominant label noise (SDN) in the real world scenarios.
- The proposed method works well on the constructed dataset, CIFAR20-SDN. However, the performance improvement on real-world dataset, Clothing 1M is marginal. This makes the existence of SDN questionable.
- Lack of some analysis of clusting effects and results in experiments. Please refer to the Questions.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - It will be more clear to describe the class and sub-class relations briefly, or giving a concrete example of CIFAR20-SDN in the main paper.
- Clustering Result: More analysis of clusting effects and results could make the proposed SDn and method more convincing. For example, on the CIFAR20-SDN dataset, does the clustering result consist with the manually created SDN noise? And what about the real-world dataset Clothing 1M?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have mentioned their limitations in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Q1: My main concern is the existence of the proposed subclass-dominant label noise (SDN) in real-world scenarios.
A1: Your concern is important. The existence of SDN in real-world scenarios is fundamental to the study of SDN. Given its importance, we respond to it in General Question 2.
After the publication of this paper, we plan to create a website to showcase the collections of SDN we've identified. The site will also offer an upload feature, allowing other users to share SDN instances they've found. This initiative aims to address this issue in real-world scenarios. Additionally, on the website, we will acknowledge all the reviewers for their invaluable suggestions.
Q2: The proposed method works well on the constructed dataset, CIFAR20-SDN. However, the performance improvement on real-world dataset, Clothing1M is marginal. This makes the existence of SDN questionable.
A2: We concede that the improvements on Clothing1M are not as large as those on CIFAR20-SDN, given the complexity of Clothing1M. However, compared with SOTA methods such as ELR+ and DivideMix, which both leverage dual networks, MixUp, and semi-supervised techniques (all known to enhance performance even in clean settings), the improvements made by NoiseCluster are remarkable, considering it does not rely on these supplemental techniques. Furthermore, a 0.7% rise is considerable, constituting over 10% of the total 6.3% improvement over the CE result. Given the straightforward approach of NoiseCluster, this enhancement is evidently due purely to its adept handling of SDN.
Q3: It will be more clear to describe the class and sub-class relations briefly, or giving a concrete example of CIFAR20-SDN in the main paper.
A3: We agree that providing a concrete example of class and sub-class relations, particularly in the context of the CIFAR20-SDN dataset, would help in understanding the concept of Subclass-Dominant Label Noise (SDN). Here's a potential addition to the manuscript:
The first category in CIFAR20 is termed "aquatic mammals," which is subdivided into five sub-classes: "beaver", "dolphin", "otter", "seal", and "whale". To introduce SDN, we randomly flip a significant proportion of labels within the "whale" subclass to the subsequent category, "fish". Subsequently, "whales" labeled as "fish" are considered as SDN.
We hope this example makes the concept of class and subclass relations, as well as the Subclass-Dominant Label Noise, clearer. We will incorporate this explanation into the revised manuscript to facilitate better understanding for readers.
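The construction described in the example above (flipping a large fraction of one subclass's labels to another superclass) can be sketched as follows. This is a minimal illustration assuming labels are plain Python lists; the helper name `inject_sdn` is our own, not from the paper.

```python
import random

def inject_sdn(coarse_labels, fine_labels, target_subclass, flip_to, ratio, seed=0):
    """Flip the coarse label of a fraction of one subclass's examples.

    E.g., flip `ratio` of the "whale" subclass (inside "aquatic mammals")
    to the "fish" superclass, producing subclass-dominant label noise (SDN).
    Returns the noisy coarse labels and the flipped indices.
    """
    rng = random.Random(seed)
    noisy = list(coarse_labels)
    idx = [i for i, f in enumerate(fine_labels) if f == target_subclass]
    flipped = rng.sample(idx, int(len(idx) * ratio))
    for i in flipped:
        noisy[i] = flip_to
    return noisy, flipped
```

Because all flipped examples come from a single coherent subclass, they share recoverable visual structure, which is exactly what makes SDN harder for early stopping than uniformly random flips.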
Q4: Clustering Result: More analysis of clusting effects and results could make the proposed SDn and method more convincing. For example, on the CIFAR20-SDN dataset, does the clustering result consist with the manually created SDN noise? And what about the real-world dataset Clothing 1M?
A4: Thank you for your insightful suggestions. We agree that a more thorough analysis of the clustering effects and results could strengthen our claims and make our proposed SDN and method more convincing.
To this end, we have added t-SNE images from CIFAR20-SDN, Clothing1M, and WebVision datasets in the attached one-page PDF. A comparison of t-SNE images from these datasets with the manually created CIFAR-10 SDN in Figure 1 in the main paper clearly demonstrates a consistent clustering phenomenon across all four datasets.
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed response
Comment: I appreciate the author's detailed response. The additional figures and explanation of the datasets could largely justify the existence of SDN in some real-world datasets, which addresses my main concerns to some extent. I've also read the other reviewers' comments and the authors' rebuttal and decided to raise my score from 4 to 5.
---
Reply to Comment 1.1.1:
Comment: We deeply appreciate the time and effort you've dedicated to providing us with valuable insights and feedback regarding the presence of SDN in real-world datasets. Such contributions significantly bolster the integrity and solidity of our work. With gratitude, we are committed to integrating these invaluable suggestions into the upcoming version.
---
Rebuttal 2:
Comment: We sincerely appreciate your invaluable insights into the existence of the proposed SDN in real-world scenarios, as well as the feedback on consistency experiments using real-world datasets and the manually created SDN dataset. We have conducted additional experiments and put the results in the one-page PDF. If there are any further areas of our research that need clarification or if there are additional queries you might have, we are more than willing to provide further explanations. | Summary: In the paper, the authors uncover the phenomenon that mislabeled examples are quickly learned during the initial stages of training when Subclass-Dominant Label Noise (SDN) is present. This behavior hinders the effectiveness of early stopping-based robust methods. To address this issue, the authors propose an approach that does not rely on early stopping. The method involves identifying mislabeled examples by clustering the penultimate layer features and correcting them by assigning them labels of the closest class. Experiments are conducted to demonstrate the superiority of their method compared to existing approaches.
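The cluster-then-correct idea summarized above (identify suspect examples in the penultimate-layer feature space, then assign each the label of the closest class) might be sketched as below. `correct_by_closest_class` is a hypothetical simplification using class centroids over trusted examples, not the paper's exact procedure.

```python
import numpy as np

def correct_by_closest_class(features, labels, noisy_idx):
    """Relabel each suspected-noisy example to its closest class centroid.

    features:  (N, D) penultimate-layer representations.
    labels:    (N,) observed (possibly noisy) labels.
    noisy_idx: indices flagged as mislabeled; centroids are computed
               from the remaining (trusted) examples only.
    """
    trusted = np.setdiff1d(np.arange(len(labels)), noisy_idx)
    classes = np.unique(labels[trusted])
    centroids = np.stack(
        [features[trusted][labels[trusted] == c].mean(axis=0) for c in classes]
    )
    corrected = labels.copy()
    for i in noisy_idx:
        dists = np.linalg.norm(centroids - features[i], axis=1)
        corrected[i] = classes[np.argmin(dists)]
    return corrected
```

The design choice here mirrors the paper's premise: after long training, representations of SDN examples cluster by visual content rather than by their noisy labels, so nearest-centroid assignment in feature space can recover the clean class.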
Strengths: 1. The finding of the failure of early stopping under SDN is intriguing with practical implications, as SDN can be a common occurrence in practice.
2. The proposed method has significant improvements over existing methods under SDN.
Weaknesses: Overall, while I appreciate the first part of the paper that discusses the failure of early stopping under SDN, I have some concerns and suggestions regarding the other parts:
1. It would be beneficial to delve deeper into the phenomenon of early stopping’s failure under SDN, e.g., exploring why subclass dominance leads to wrong labels being learned quickly.
2. The advantage of long-trained representations may not be surprising, as training progress can naturally lead to more distinguishable clusters. It seems that this property is not specific to SDN.
3. Regarding the combination with SSL, it is unclear why there would be features that are not assigned any group index. Are these features the same as $U^c_{k-1}$ mentioned in section 4.1?
4. The Noisecluster+ method, where labels are not corrected but instead discarded (as those examples are treated as unlabeled), achieves better performance than the class-correction based method. Does this mean label correction is not as effective?
5. I am curious about the percentage of examples whose labels were successfully corrected by the label-correction method.
6. In Table 5, is the method without label correction simply vanilla training? If that is the case, vanilla training already accomplishes the majority of the performance, and the performance improvements attributed to label correction appear relatively modest (1%). So the proposed method does not contribute much.
7. ELR should also be included in Table 2, as it does not require SSL (only ELR+ does).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: In addition to the questions raised in Weaknesses, I have the following additional questions:
1. Regarding Cloth1M, why was only one epoch used for training? Furthermore, what is the rationale behind training on 95% of the data first and then applying SDN with only 5%? Currently it is unclear why these modifications are made.
2. The method doesn't appear to be specifically designed for SDN alone, as identifying mislabeled examples and the label correction part make sense for general noise as well. How does the proposed method perform under normal noise?
3. Since the authors mentioned incorporating self-supervised learning, it would be valuable to discuss self-supervised pretraining, which has been shown to enhance robust methods [1][2][3][4]. As features are initially learned without labels, starting from a pretrained model might lower the chance of mislabeled examples in SDN being learned early. Therefore, I wonder if early stopping could potentially be effective with this pretraining even under SDN.
[1] Hendrycks, Dan, et al. "Using self-supervised learning can improve model robustness and uncertainty." Advances in neural information processing systems 32 (2019).
[2] Zheltonozhskii, Evgenii, et al. "Contrast to divide: Self-supervised pre-training for learning with noisy labels." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2022.
[3] Xue, Yihao, Kyle Whitecross, and Baharan Mirzasoleiman. "Investigating why contrastive learning benefits robustness against label noise." International Conference on Machine Learning. PMLR, 2022.
[4] Ghosh, Aritra, and Andrew Lan. "Contrastive learning improves model robustness under label noise." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have discussed limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Q1: It would be beneficial to delve deeper ... dominance leads to wrong labels being learned quickly.
A1: Thanks for the insightful question. To explain why SDN is learned quickly, we must recall why a machine learning model can learn a generalized function and why the early stopping phenomenon exists in learning with noisy labels. In real life, a "class" denotes a group of objects with similar characteristics, defined by a set of rules. Deep networks, designed to identify these characteristics, facilitate the classification of real-world classes. In supervised learning, the classification task involves learning a function from prior labeled samples that can accurately predict unobserved samples, in accordance with these characteristics.
This viewpoint sheds light on the early stopping phenomenon. A network can quickly learn a generalized function to recognize correctly labeled samples within a noisy dataset, as both observed and unobserved examples with clean labels share the same characteristics. On the other hand, the network encounters difficulty in finding a generalized function for mislabeled samples, especially if their generation method (e.g., random selection) fails to align with any real-world rule. As a result, the network is forced to memorize mislabeled samples individually to distinguish them from correctly labeled ones. This leads to the earlier learning of correctly labeled samples.
However, real life provides various ways to organize objects, such as by status, color, shape, etc. For instance, in the experiments shown in Figure 1, examples within the airplane class can be divided into flying or landed categories based on their status. We flip the labels of landed airplanes to automobiles. The network can then quickly learn a generalized function to distinguish landed airplanes from flying ones by recognizing special features of landed airplanes, such as wheels or the ground. Overall, this may explain why SDN is learned quickly.
Additionally, we have visualized SDN examples from both Clothing1M and WebVision datasets and included them in the attached one-page PDF. We believe these images will provide more hints on why SDN can be learned rapidly.
Q2: The advantage of long-trained representations may not be surprising, as training progress can naturally lead to more distinguishable clusters. It seems that this property is not specific to SDN.
A2: (1) Thank you for your observation. In the self-supervised learning community, it is widely recognized that long-trained representations become more distinguishable in alignment with their inherent features. In contrast, supervised learning typically sees representations of examples forming more distinct clusters based on their labels. To the best of our knowledge, no existing paper in the field of learning with noisy labels has discussed that, under supervised learning, long-trained representations gravitate towards their inherent features (e.g., images) rather than conforming to their (noisy) labels.
(2) Yes, this property is not unique to SDN. We have demonstrated that symmetric label noise exhibits a similar phenomenon, as shown in Figure 3(d). However, the representations resulting from symmetric label noise are significantly more complex and cannot be effectively clustered in a 2-dimensional space. Research methods, including TopoFilter and ELR, have highlighted the vital importance of the early learning phase in preventing the degradation of representations trained over extended periods. Thus, long-trained representations may not be suitable for all types of label noise.
Q3: Regarding the combination with SSL, it is unclear why there would be features that are not assigned any group index. Are these features the same as mentioned in section 4.1?
A3: Thank you for bringing up this point. There are some peripheral points in the t-SNE images that are too sparse to constitute a new cluster and too distant from established groups. These points therefore belong to no group and are treated as unlabeled examples for SSL. For a more comprehensive understanding of DBSCAN Noise, we recommend referring to the original DBSCAN paper.
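To make this concrete, here is a minimal sketch of the behavior described above, assuming scikit-learn; the synthetic 2-D points stand in for t-SNE embeddings, and the `eps`/`min_samples` values are illustrative only, not the paper's settings. DBSCAN assigns label `-1` to points too sparse and distant to join any cluster, and exactly these points would be handed to SSL as unlabeled examples.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic 2-D embeddings standing in for t-SNE outputs.
rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=(0.0, 0.0), scale=0.1, size=(50, 2))
cluster_b = rng.normal(loc=(5.0, 5.0), scale=0.1, size=(50, 2))
peripheral = np.array([[10.0, -10.0], [-10.0, 10.0]])  # sparse outlying points
embeddings = np.vstack([cluster_a, cluster_b, peripheral])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(embeddings)

# DBSCAN marks points that fit no cluster with the label -1 ("DBSCAN Noise");
# these would be the examples treated as unlabeled data for SSL.
unlabeled_mask = labels == -1
print(unlabeled_mask.sum())  # the two peripheral points
```

In this toy setup the two dense blobs each form a cluster, while the two peripheral points receive the noise label and become the unlabeled pool.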
Q4: The Noisecluster+ method, where labels are not corrected ... label correction is not as effective?
A4: There may be some confusion here. Label correction is used in both NoiseCluster and NoiseCluster+; the term "NoiseCluster+" denotes NoiseCluster plus MixMatch.
Q5: I am curious about the percentage of examples whose labels were successfully corrected by the label-correction method.
A5: The means and standard deviations computed over five runs are presented in the table below.
|Method| SDN-12 | SDN-16 | SDN-18 | SDN-20 |
|:-----|:----:|:----:|:----:|:----:|
|confident examples | 41536 (361) | 41484 (455) | 41812 (501) | 42245 (129) |
|unconfident examples (DBSCAN Noise) | 3464 (361) | 3516 (455) | 3188 (501) | 2755 (129) |
|total corrected examples | 4774 (761) | 4914 (544) | 5958 (562) | 6112 (609) |
|successfully corrected examples | 4153 (446) | 4422 (763) | 5040 (800) | 3272 (762) |
|successful correction rate (\%) | 87.50 (4.47) | 89.52 (5.65) | 84.25 (6.16) | 53.17 (8.47) |
Responses to questions Q6-Q10 can be found in General Questions due to character limitations.
---
Rebuttal 2:
Comment: We sincerely appreciate the time and effort you've invested in thoroughly reviewing our paper, providing us with profound insights and constructive feedback. In response to the inquiries, we have addressed the following points:
1. Offered an in-depth explanation of the phenomenon of early stopping's failure.
2. Elucidated the significance of the long-trained representations discovered under label noise.
3. Carried out additional experiments to measure the successful correction rate and clarified that label correction is used in both NoiseCluster and NoiseCluster+.
4. Delineated the reasons for specifically designing NoiseCluster for SDN.
5. Introduced a discussion of the challenges SDN poses for self-supervised learning.
We also wish to emphasize the main contributions of this paper:
1. We are the first to identify a new kind of prevalent real-world label noise, and we create a dataset, CIFAR20-SDN, that successfully mimics its behavior (shown in the one-page PDF).
2. We are the first to explicitly demonstrate the limitations of early stopping, which is widely used in SOTA methods.
3. We propose an innovative yet straightforward approach to tackle SDN. Furthermore, by integrating existing early stopping-based methods, NoiseCluster successfully detects and corrects SDN in large, real-world noisy datasets.
We will be integrating these modifications into the upcoming version. As the discussion period nears its end, we wish to make certain that all facets of our research are clear and well-understood. Are there any other areas within our study that you believe need further detail or clarification?
---
Rebuttal Comment 2.1:
Title: Thanks for your response
Comment: Thank you for clarifying, and I apologize for my late reply. The responses from the authors have addressed most of my concerns. As a result, I've raised my score. | Rebuttal 1:
Rebuttal:
General Questions
Q1: What is NoiseCluster's performance across different noisy label models, like symmetric, instance, and asymmetric?
A1: NoiseCluster has been specifically designed to tackle SDN, utilizing our proposed late stopping strategy. As evidenced in Table 1, representations under symmetric, instance, or asymmetric noise largely degrade over extended training periods. In such cases, early stopping is the typical strategy to prevent this degradation. Therefore, NoiseCluster might not match the performance of other SOTA methods for these specific noise types.
For a comprehensive algorithm to manage various label noises, we've proposed a combined strategy in Line 212. This involves pairing early stopping-based methods with NoiseCluster. Initially, the early stopping method tackles label noise such as symmetric, instance, or asymmetric. Subsequently, NoiseCluster continues to address SDN. By adopting this integrated approach, we ensure competitive performance against symmetric, instance, or asymmetric label noise, while also tackling a new type of real-world label noise.
Q2: Is the proposed subclass-dominant label noise (SDN) prevalent in real-world scenarios?
A2: We first address this question through logical reasoning, followed by showcasing examples of SDN from real-world datasets.
In real-world situations, label noise is not uniformly random (e.g., symmetric noise) but tends to follow certain patterns. For instance, samples from specific sub-classes may frequently be mislabeled, especially if annotators are unfamiliar with them. When humans annotate ambiguous samples, we do not assign arbitrary labels; rather, we label based on our existing understanding. Hence, if an example is mistakenly labeled, other closely related examples (in the same subclass) are likely to be similarly mislabeled. Labels generated by models can make this issue even more serious, since models misclassify similar samples in a consistent manner. This consistent mislabeling within sub-classes results in the prevalence of SDN in real-world datasets.
To confirm the presence of SDN in real-world datasets, we carried out experiments on Clothing1M and WebVision by carefully examining the images in clusters. Our observations indicate that SDN is widespread across classes in both datasets. These findings are shown in the attached one-page PDF.
More responses to Reviewer 7AfS, Questions 6-10
Q6: In Table 5, is the method without label correction simply vanilla training?
A6: The result of 'Ours W/O CORRECTION' is not equivalent to vanilla training, which is denoted as 'CE' in Table 2. Our method consists of two crucial components: identification and label correction. 'Ours W/O CORRECTION' means that we do not correct the noisy labels, but we still identify potentially mislabeled examples. Once identified, these can be excluded from the confident examples. This removal is a simple yet effective strategy that yields a 6-8\% improvement; note that the improvement from removal builds on successfully identifying SDN.
However, label correction remains a risky action that may not work in more complex situations. We will provide further clarification in Table 5 and Section 5.3. Thank you for pointing this out.
Q7: ELR should also be included in Table 2, as it does not require SSL (only ELR+ does).
A7: It's worth noting that ELR still retains some semi-supervised techniques, such as temporal ensembling and target probabilities. Since the performance of ELR+ falls below that of NoiseCluster (without semi-supervised techniques), the results might not differ significantly.
Q8: Regarding Cloth1M, why was only one epoch used for training? ... why these modifications are made.
A8: (1) Clothing1M comprises one million images with noisy labels. Even a single training epoch provides enough data for fine-tuning a pretrained model, equivalent to 20 epochs on CIFAR-10. As a result, we only train the network for one epoch, making NoiseCluster not only the state-of-the-art method but also the fastest method on Clothing1M.
(2) Clothing1M is a real-world noisy dataset containing complex types of label noise, not only SDN. We therefore employ a simple early stopping method, training on 95\% of the data, to remove other types of label noise, and then apply NoiseCluster to the filtered dataset to continue identifying and correcting SDN.
(3) The primary rationale for applying NoiseCluster to 5\% of the training data is to keep the number of training examples on a par with CIFAR20-SDN, thereby minimizing the need to tune the DBSCAN hyperparameters. An additional consideration is computational cost: computing t-SNE for one million features takes around one hour, whereas it takes only about five minutes for 50k features.
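For concreteness, the pipeline described above (embed features with t-SNE, then cluster with DBSCAN on a subsample) can be sketched as follows, assuming scikit-learn; the random feature matrix, subsample size, and all hyperparameters here are stand-ins, since the real inputs would be network features from the filtered 5% of Clothing1M.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

# Stand-in for penultimate-layer features of the trained network.
rng = np.random.default_rng(1)
features = rng.normal(size=(1000, 32))

# Subsampling keeps t-SNE tractable: its cost grows quickly with the number
# of points, which is why the full 1M-feature set is avoided.
subset = features[rng.choice(len(features), size=100, replace=False)]

embedded = TSNE(n_components=2, perplexity=10.0, random_state=0).fit_transform(subset)
groups = DBSCAN(eps=3.0, min_samples=5).fit_predict(embedded)
```

The `groups` array then drives identification: points sharing a cluster index form candidate sub-classes, while index `-1` marks unconfident examples.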
Q9: The method doesn't appear to be specifically designed for SDN alone, ... under normal noise?
A9: NoiseCluster has been specifically designed to tackle SDN. For its performance on other types of synthetic label noise, such as symmetric or asymmetric noise, please see our response to General Question 1.
Q10: it would be valuable to discuss self-supervised pretraining, ... even under SDN.
A10: It should be noted that our findings do not conflict with previous works. When using self-supervised pretrained models, the challenge of early stopping may be intensified. Since self-supervised learning is capable of uncovering underlying characteristics, it might enlarge the feature distances between sub-classes. This could accelerate overfitting to SDN, making early stopping even less effective. Supporting evidence can be drawn from the performance of C2D (ELR+ with SimCLR) [2] on the Clothing1M dataset: C2D lags behind NoiseCluster by 0.93\% and even trails ELR+ by 0.23\%, despite the fact that initialization from self-supervised pretrained models surpasses that from supervised learning.
Pdf: /pdf/4b3638c94196fa9a28d8146bc0f5c481bb9d7d97.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Online Ad Allocation with Predictions | Accept (poster) | Summary: This paper is for the online matching problem in the display advertising domain. Essentially there are shopper/supply side requesting coming in in a sequential order, and static advertiser side demand with budget constraint. The goal is to do some sort of global optimization (as opposed to greedy strategy to maximize value for each request t). The paper proposed a learning-augmented algorithm and demonstrated its effectiveness via a theorem as well as simulations.
Score changed from 6 to 7 after seeing the rebuttal from the authors.
Strengths: 1. The robustness against low-quality predictions. This is a very nice property for an algorithm to have in practice. It seems the benefit doesn't vanish, though it decays. Given that prediction quality always carries uncertainty and affects the cumulative value of any algorithm, what is shown in this paper is already good.
2. Convincing simulation/experiment results using real-world dataset.
3. I particularly like line 170 - 191 which has done a nice job explaining the intuition of algorithm 1.
Weaknesses: 1. I don't know if the #impression over #advertiser ratio will significantly affect the simulation results, but I think this ratio is similar across the tested datasets. It's worth a little more discussion. I think for a real-world scenario, the actual ratio won't be on the scale of 100.
2. I think the gain beyond the greedy algorithm that maximizes the single-impression utility is usually more important than the regret compared to OPT. Worth having related info in the paper.
3. How to handle the targeting constraint? This is very common in display advertising that an advertiser is only interested in a subset of impressions relevant to their own business. It effectively means one additional constraint in the primal problem.
4. The level of contribution beyond what is done in existing work is unclear. I didn't get the idea that the Bamas et al. 2020 work poses this as an open problem, though the two are indeed relevant.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see pros/cons
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments and now address the mentioned weaknesses.
1. This is an interesting point and we have now conducted further experiments with synthetic data using #impression to #advertiser ratios from 20 to 2000. We did obtain similar results for these ratios when scaling budgets for the advertisers proportionally (if the budget is too low or too high, the instance becomes very easy and the baseline exponential averaging algorithm manages to extract close to 100% of the optimum value). We will add such experiments and a discussion to a revised version of the paper. The plots can be found in the PDF that was attached to the global comment above in Figure 2.
2. We thank the reviewer for this suggestion. Following the reviewer's suggestion, we added greedy algorithms as baselines to our experiments. We ran experiments with two greedy schemes: One which considers only the impression values (Greedy), the other the gain after disposal (Discounted Greedy). The evaluation shows that the two greedy schemes do worse than the algorithm of Feldman et al. (2009a) in all instances, except for the Discounted Greedy algorithm on synthetic data. We provide the experimental results in the PDF that was attached to the global comment above in Figures 1 and 2. From a theoretical perspective, the competitive ratio with respect to the optimal solution provides the strongest guarantee and this coincides with the robustness in our setting.
3. Targeting constraints can be directly modeled by setting the value $w_{at}$ of impression $t$ and an advertiser $a$ without the need for additional constraints. For instance, if an impression $t$ is not relevant for advertiser $a$, we can set $w_{at}$ to any negative value, which will ensure that the impression $t$ is never allocated to advertiser $a$.
4. Our algorithm and analysis are a significant departure from prior works. More specifically:
- Compared to Mehta et al. (2007), which is the most closely related algorithm with predictions, we tackle problems that have less structure, which requires the use of free disposal and, foremost, new algorithmic ideas to incorporate predictions. A detailed comparison can be found in Lines 200 through 218.
- Compared to Feldman et al. (2009a), which is the most closely related algorithm without predictions, we develop a novel analysis to prove consistency. Feldman et al. (2009a) analyze their algorithm in a local manner by relating the changes in the primal and dual objective values after each individual update to the solutions, as is common for algorithms following the primal-dual framework (see e.g. the survey of Buchbinder and Naor (2009)). This local approach is not sufficient to obtain our guarantee. Instead, we analyze our algorithm via a novel global argument that expresses the objective values of the prediction and the final solution constructed by the algorithm as suitable linear combinations of the impression values. A direct comparison of the coefficients in the resulting linear combinations will not give our guarantee. Instead, we make several important observations that allow us to redistribute mass across coefficients (see Lemmas 4 and 9 in the appendix). The resulting analysis is delicate, as can be seen, for example, in the proof of Lemma 9 in the appendix. We believe our novel approach could potentially be used to obtain improved learning-augmented algorithms for other problems in the primal-dual framework, such as the ones detailed in the work of Buchbinder and Naor (2009).
- Bamas et al. (2020) develop learning-augmented algorithms for covering problems using the primal-dual method but leave packing problems for future work in their discussion on future directions. One of the main problems considered by Bamas et al. (2020) is set cover, which is a fundamentally different problem to ad allocation, both with and without predictions. Our works and theirs do not overlap beyond the general use of the primal-dual framework.
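To make the two greedy baselines from point 2 above concrete, here is a minimal, illustrative sketch of the discounted greedy scheme under free disposal (our own toy implementation, not the paper's primal-dual algorithm or the exact experimental code): each impression goes to the advertiser with the largest gain after disposal, i.e., its value minus the value of the least valuable impression it would displace. The plain Greedy variant would instead compare raw impression values only.

```python
import heapq

def discounted_greedy(impressions, budgets):
    """Illustrative free-disposal baseline.

    impressions: list of {advertiser: value} dicts arriving online.
    budgets: {advertiser: capacity} (Display Ads: each impression has size 1).
    """
    kept = {a: [] for a in budgets}  # min-heap of kept impression values
    for values in impressions:
        best_a, best_gain = None, 0.0
        for a, w in values.items():
            # Gain after disposal: a full advertiser must drop its least
            # valuable kept impression to accept a new one.
            displaced = kept[a][0] if len(kept[a]) >= budgets[a] else 0.0
            if w - displaced > best_gain:
                best_a, best_gain = a, w - displaced
        if best_a is not None:
            heapq.heappush(kept[best_a], values[best_a])
            if len(kept[best_a]) > budgets[best_a]:
                heapq.heappop(kept[best_a])  # free disposal
    return sum(sum(h) for h in kept.values())

# A later high-value impression displaces an earlier one at no cost:
total = discounted_greedy([{"a": 1.0, "b": 0.9}, {"a": 2.0}], {"a": 1, "b": 1})
print(total)  # 2.0
```

The usage example also hints at why greedy schemes can be suboptimal: assigning the first impression to "b" instead would have yielded 2.9.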
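The negative-value encoding of targeting constraints from point 3 above can be sketched as follows (hypothetical values, independent of the paper's algorithm): once untargeted advertiser-impression pairs carry a negative value, any assignment rule that only ever allocates pairs with positive value automatically respects targeting.

```python
NOT_TARGETED = float("-inf")  # any negative value works

# Hypothetical value matrix w[a][t]: advertiser "b" does not target impression 0.
w = {
    "a": [1.0, 0.5],
    "b": [NOT_TARGETED, 0.8],
}

def best_advertiser(t):
    """Assign impression t among advertisers with positive value; a negative
    w_at means the pair can never improve the objective, so it is skipped."""
    candidates = {a: vals[t] for a, vals in w.items() if vals[t] > 0}
    return max(candidates, key=candidates.get) if candidates else None

print(best_advertiser(0), best_advertiser(1))  # a b
```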
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and for addressing my questions especially on point 1 and 2, I will change my score to accept (from 6 to 7). | Summary: This paper studied Display Ads and the generalised assignment problem (GAP) where ad impressions arrive online and are allocated immediately to advertisers on the offline side. The difference between the two problems is that in Display Ads each impression takes uniform size of advertisers’ budgets, while in GAP the size is non-uniform. The paper considered the setting that incorporates predictions for allocation, which can be accessed by the algorithm upon arrival of each impression. The paper proposed a primal-dual algorithm based on Feldman et al. (2009a), with new ingredients to deal with predictions. Firstly, the new algorithm decides whether to allocate an impression to the predicted advertiser or to one with max discounted gain based on comparison involving a robustness-consistency trade-off parameter. Secondly, the algorithm updates the dual variable with an exponential mechanism involving the trade-off parameter. The paper provided theoretical analysis for the robustness and consistency factors, as well as experimental results on synthetic and real world datasets.
Strengths: 1. Online matching and related problems have been extensively studied in the literature and have found applications in online business. This paper studies Display Ads and GAP, which are generalized versions of online bipartite matching and Ad Words and can capture more complicated scenarios in practice. Worst-case guarantees can be restrictive, while external or historical information may be available for making online decisions. This paper proposes an efficient and effective algorithm in this setting.
2. The paper proposed a solid primal-dual approach with theoretical performance guarantee that incorporates predictions that is accessible in general forms. The performance of the algorithm improves over worst-case and random mixture baselines in experiments.
3. The paper is clearly written. The main text provides necessary intuition and discussion about the algorithm and analysis, as well as comparison with related work.
Weaknesses: 1. The paper only presented the proposed approach for Display Ads in the main text, but put the whole GAP part to Appendix. It would be helpful to briefly discuss in the main text how to extend the approach to GAP and take care of general impression sizes.
2. It is not clear if consistency is a good metric for evaluating the performance of algorithms with predictions. Unlike robustness, the consistency factor relies on a varying value of solution PRD to be compared against. For instance, in the upper left of Figure 3, the consistency value is high when PRD itself has low competitive ratio, while it is below 1 when PRD=OPT. It is hard to relate the absolute value of consistency to the quality of ALG.
3. The paper mentioned that the algorithm incorporates machine-learning predictions in abstract and introduction, which seems misleading. The actual algorithm receives a prediction PRD(t) for each impression t, which is presented in an abstract form and is not necessarily related to machine learning.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Theorem 1 the robustness R is a decreasing function with respect to \alpha, but in experiments the robustness has a trend of (slightly) increasing w.r.t. \alpha. What is the cause of this phenomenon?
2. The limit of R(\alpha) in Theorem 1 tends to be 1 when \alpha tends to be 0. How does R(\alpha) compare with the worst-case ratio 1-1/e?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper briefly discussed limitations of the proposed algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments and address the weaknesses in the following.
1. We thank the reviewer for this suggestion and we will add such a discussion to the main body. Our reason for discussing only Display Ads in the main body is that it contains some of the most important algorithmic ideas of our approach.
2. Our work follows the standard practice in the literature on learning-augmented algorithms to measure the performance of an algorithm via consistency and robustness. We agree with the reviewer that when the prediction is good, the consistency allows us to gauge how closely the algorithm follows it. For worse predictions, it is most informative to look at both consistency and robustness, as this tells us how well the algorithm is able to overcome errors in the prediction.
3. It is true that predictions can come from any source and we will clarify this in a revised version of our paper.
We now address the reviewer's questions.
1. This is an interesting point and we will add an explanation for this behavior to a revised version of our paper. In our theoretical analysis, we have to assume that the prediction can be adversarial, which means that the robustness necessarily declines the more we trust the prediction. In practice, a prediction is not chosen to be adversarial, even though its objective value can be much less than the optimum. In our experiments, we observe that for increasing $\alpha$, our algorithm is able to exploit “good suggestions” more while it still ignores “bad suggestions”. This explains why there is no immediate trade-off as in the guarantees of Theorem 1 and showcases the practical merit of combining worst-case algorithms with learned predictions, which may still err on some parts of the input.
2. In our paper, we only consider parameters $\alpha\ge1$. For $\alpha=1$ (which means that we do not trust the prediction at all), our algorithm is identical to the worst-case baseline due to Feldman et al. (2009a). In this case, we also recover the worst-case competitive ratio of $1-\frac{1}{e}$ in the large-budget case. | Summary: The paper discusses the problem of ad allocation and its generalization, the generalized assignment problem (GAP), which are two well-studied online packing problems with important applications in ad allocation and other areas. The paper presents an algorithm for both problems that incorporate machine-learned predictions and can thus improve the performance beyond the worst-case. The algorithm is based on the work of Feldman et al. (2009a) and similar in nature to Mahdian et al. (2007) who were the first to develop a learning-augmented algorithm for the related, but more structured Ad Words problem. The paper’s contributions are that it designs the first algorithms that incorporate machine-learned predictions for Display Ads and GAP. The two problems are general online packing problems that capture a wide range of applications. The algorithm follows a primal-dual approach, which yields a combinatorial algorithm that is very efficient and easy to implement. It is able to leverage predictions which can be learned from historical data. Using a novel analysis, the paper shows that the algorithm is robust against bad predictions and able to improve its performance with good predictions. In particular, it is able to bypass the strong lower bound on the worst-case competitive ratio for these problems. The paper experimentally verifies the practical applicability of its algorithm under various kinds of predictions on synthetic and real-world data sets. Here, it observes that its algorithm is able to outperform the baseline worst-case algorithm and the random-mixture algorithm.
Strengths: * Originality: lies in its use of machine-learned predictions to improve the performance of online packing algorithms for ad allocation, i.e., Display Ads and GAP.
* Quality: presents a rigorous analysis of its algorithm and provides experimental results that demonstrate its effectiveness.
* Significance: addresses an important problem in online advertising and provides a novel solution that can improve the performance of existing algorithms.
Weaknesses: 1. The paper does not provide a detailed analysis of the limitations of its algorithm. For example, it is not clear how the algorithm would perform in situations where the predictions are inaccurate or where there are other sources of uncertainty.
2. The paper does not provide a comparison of its algorithm with other state-of-the-art algorithms for online ad allocation. This makes it difficult to assess the relative performance of the proposed algorithm and to determine whether it is truly novel and effective.
3. The paper does not provide a detailed discussion of the practical implications of its algorithm for real-world ad allocation systems. This could be addressed in future work by providing a more detailed analysis of the computational requirements and scalability of the proposed algorithm, as well as by conducting experiments on real-world data sets.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See the Weakness section, especially Weakness 2.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See the Weakness section, especially Weakness 1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and address the mentioned weaknesses in the following.
1. Our algorithm is specifically designed to perform well even if the predictions are inaccurate (from any source of uncertainty). In the field of learning-augmented algorithms, the measure of robustness is used to indicate how well the algorithm performs under an arbitrarily bad prediction. We show a theoretical bound on the robustness in Theorem 1 and experimentally validate our algorithm on inaccurate predictions in Section 4.
2. To the best of our knowledge, we compare our algorithm against the state-of-the-art algorithms with theoretical guarantees for the problems we consider, both with and without predictions. We are the first to develop learning-augmented algorithms for the problems Display Ads and GAP. For these problems, we compare against the state-of-the-art worst-case algorithm, which is due to Feldman et al. (2009a). This work achieves a competitive ratio of $1-\frac{1}{e}$ which is best possible in the worst-case. For the more structured Ad Words problem, to the best of our knowledge, the best known competitive ratio with predictions is achieved by the algorithm of Mehta et al. (2007). We evaluate our algorithm on Ad Words instances and compare with this work in Section C.3 of the appendix in the supplementary materials. We will reference this in the main body. We would appreciate any suggestions from the reviewer on other algorithms and we will try to add them to a revised version of our paper.
3. We do provide experiments on the real-world datasets Yahoo and iPinYou in Section 4 (Line 293) of our paper. These are the two main datasets that are publicly available. If the reviewer has suggestions for other datasets that are suitable, we are happy to try to evaluate our algorithms on these datasets. Our algorithm is very efficient and scalable, and we ran all our experiments on a laptop with an i7-1165G7 CPU and 16Gb of memory. We are happy to add a discussion on the scalability of the algorithm in the next revision. We agree with the reviewer that exploring the implications of our work on real-world ad allocation systems is an interesting direction for future work. | Summary: This paper considers the online display ads and generalized assignment problems under free disposal in the "algorithms with predictions setting". In the display ads problem there are offline advertisers $a$ with budgets $B_a$. A sequence of impressions arrive online with differing values to each advertiser (impression $t$ has value $w_{at}$ to advertiser $a$). The algorithm must either allocate each impression to an advertiser, earning its value and consuming one unit of the advertiser's capacity, or reject the impression entirely. Decisions are made irrevocably, but in the free disposal model the algorithm may assign more impressions to an advertiser than their capacity allows but the algorithm only gains the value of the most valuable impressions that fit within the capacity. The generalized assignment problem generalizes the display ads problem by allowing impression $t$ to instead consume $u_{at}$ units of advertiser $a$'s capacity.
In the "algorithms with predictions" setting the online algorithm also has access to potentially noisy predictions about the optimal solution. In particular, this paper assumes that the algorithm may access a prediction $PRD(t)$ when impression $t$ arrives which gives a hint about which advertiser to allocate impression $t$ to in an optimal solution in hindsight. The prediction may be incorrect, so two quantities are analyzed: consistency and robustness. Consistency is given by the worst-case ratio of $ALG/PRD$, where $ALG$ is the algorithms solution quality and $PRD$ is the predictions solution quality. Robustness is given by the worst -case ratio of $ALG/OPT$, where $OPT$ is the optimal solution in hindsight.
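Stated formally, mirroring this summary's own definitions (with $I$ denoting an input instance and the infima ranging over instances and predictions), the two measures are:

```latex
\mathrm{consistency} \;=\; \inf_{I,\,\mathrm{PRD}} \frac{\mathrm{ALG}(I,\mathrm{PRD})}{\mathrm{PRD}(I)},
\qquad
\mathrm{robustness} \;=\; \inf_{I,\,\mathrm{PRD}} \frac{\mathrm{ALG}(I,\mathrm{PRD})}{\mathrm{OPT}(I)}.
```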
Theoretically, this paper gives algorithms with non-trivial tradeoffs between robustness and consistency. The algorithm is based on the primal-dual algorithm used by Feldman et al. 2009, but modified to account for the predictions. Proving their consistency bound required new techniques, since the local analysis typically used in the primal-dual method is insufficient here.
The authors complement the theoretical results with an experimental analysis on both real and synthetic data of their proposed algorithm using various predictions as input. The experiments compare to the worst-case baseline of Feldman et al. 2009 and the simple random-mixture algorithm they discuss in the introduction.
Strengths: This paper considers online allocation problems motivated by advertising applications in the algorithms with predictions setting. Prior work has considered related problems in this setting. This paper considers more general versions of these problems and also considers the free-disposal setting, which has not yet been considered in the algorithms with predictions literature. The paper gives strong theoretical guarantees for their algorithm and complements this with a thorough experimental analysis.
Weaknesses: - Some of the presentation of the results in the experiment section could be clarified/improved. See comments/questions below.
- The improvement over the worst-case baseline using "more realistic" predictions is somewhat narrow (although in practice this smaller improvement could matter in some cases).
- Theoretically, it is not clear if the trade-off between robustness and consistency is tight. Giving tight guarantees has become of interest recently, see the two references below:
- Wei, A., Zhang, F., "Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms." NeurIPS 2020.
- Jin, B., Ma, W., "Online Bipartite Matching with Advice: Tight Robustness-Consistency Tradeoffs for the Two-Stage Model." NeurIPS 2022. (this paper may also be good to discuss in the related work).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ## Questions
- Please clarify what is meant by "prediction competitiveness" in the plots in Figure 4.
- What was observed experimentally for the competitive ratio of the worst case baseline?
- In Figures 3 and 5 the average competitive ratio across 5 trials is reported in parentheses for different predictors. Since there is a hyperparameter $\alpha$ that is varied, can you clarify which value of $\alpha$ was used to produce these numbers?
## Comments
- For the plots in Figure 2, it might be helpful to visually compare with the consistency/robustness guarantees of the random-mixture algorithm as well as the algorithm due to Mahdian et al. 2007.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I believe the authors have adequately addressed potential limitations and there is not a high potential for negative societal impacts from this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments and address the weaknesses in the following.
1. We thank the reviewer for the suggestions for improvements. We will incorporate them in the paper.
2. Indeed, the improvement over the worst-case algorithm when using the Dual Base prediction is small. We discuss the performance of our algorithm with the Dual Base prediction in Section C.2 under “Results”. We observed in all of our datasets, both real-world and synthetic, that the Dual Base prediction performs poorly despite its strong theoretical guarantees (e.g. on the iPinYou dataset, Dual Base achieves a competitive ratio of 64% while the worst-case algorithm of Feldman et al (2009a) achieves 95%). We speculate that the reason is that it constructs a fixed allocation rule based only on an initial sample. Moreover, we observe that the errors of the Dual Base algorithm may mislead our algorithm. We observe that the better-performing predictors, which we obtain by corruptions of the optimum, lead to much better results.
3. We thank the reviewer for the suggestion and the references. We will cover the references and include a discussion on tightness in the revision. Our analysis technique can be pushed further to achieve a slight improvement in the guarantees. However, doing so is quite technical and we omitted it in the interest of simplicity and conciseness. We think it is an interesting direction for future work to see whether a different analysis can yield more significant improvements.
We also address the reviewer's questions.
1. The prediction competitiveness is the ratio of the objective value of the predicted solution and the optimum, i.e. $\mathrm{PRD}/\mathrm{OPT}$. We will add this to the description.
2. The competitive ratio of the worst-case baseline is 95.8\% on the iPinYou instance, 87.6\% on the Yahoo instance in Figure 3, and 90.6\% for the synthetic instance in Figure 5. The black line in the plots shows the robustness of the worst-case baseline, which coincides with the competitive ratio.
3. The competitive ratios of the predictors are independent of $\alpha$. The value of $\alpha$ is used only by our algorithm, alongside a predicted solution.
We thank the reviewer for the comment on Figure 2 and we will add the other guarantees to a revised version of our paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, I will keep my score the same (accept). | Rebuttal 1:
Rebuttal: Following reviewer eG65's suggestions, we ran additional experiments with two additional baselines and varied the number of impressions in our synthetic instances. The additional baselines are two greedy schemes: one considers only the impression values (Greedy) and the other the gain after disposal (Discounted Greedy). The results can be found in the attached PDF.
Pdf: /pdf/49141dc584867c00ebd3bbd9503b94551ecc2ae6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Abide by the law and follow the flow: conservation laws for gradient flows | Accept (oral) | Summary: The authors study conservation laws in the gradient flow dynamics of neural networks.
They introduce a notion of local factorisation of the loss, stating that the loss in the neighbourhood of a given weight vector can be decomposed into a composition of functions: a data-independent term $\phi$ followed by a data-dependent one $f$.
Under this decomposition, the authors set out to characterise conserved quantities for fixed $\phi$ and for all $f$ giving rise to a proper ERM loss.
They claim that, under some assumptions, and for ReLU networks, this is equivalent to characterising conserved quantities for fixed $\phi$ and for all smooth $f$.
More strongly, they claim that this is also equivalent to characterising conserved quantities for fixed $\phi$ and for a special finite-dimensional subspace of functions, living in the linear span of the $d$ rows of the Jacobian of $\phi$, where $d$ is the dimension of the codomain of $\phi$.
The authors then notice that the conservation laws of linear and ReLU networks will be polynomial in the weights, and consider the question of characterising maximal sets of conservation laws.
They link this number to the dimension of the Lie algebra generated locally by the $d$-dimensional vector field associated to the Jacobian of $\phi$, and work out some examples explicitly.
Finally, the authors consider the question whether the known conservation laws for two-layers linear and ReLU networks form a maximal set, and find an affirmative answer.
They conjecture, and briefly discuss this point in Appendix 9, that the same results hold for larger depths.
Strengths: - The article is well written, and guides the reader nicely through quite technical results without requiring too much previous knowledge.
- The results seem novel to the extent of my knowledge of the literature, which is not comprehensive on this subject. They seem to fit nicely into a pre-existing line of works (see ref [13] and [26] for example).
- The main text provides reasonable justification for most of the formal results, which are proven in the Appendix. While I did not check the correctness of the proofs, the results seem reasonable.
Weaknesses:
- It is not clear to what extent the factorisation property the authors introduce is necessary to their analysis. Said differently, it is not clear whether the analysis could be performed similarly on a concrete example (say, a fixed dataset) without any mention of a factorisation. The authors could add additional high-level explanations/justifications of this concept.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
## Major questions
- It is not clear to me whether the requirement in Eq. 2 is vacuous, meaning that any architecture automatically satisfies it. Can't I just take $\phi$ as the identity map, and $f = \mathcal{E}$? If this trivial factorisation is possible, are the results still non-trivial? Maybe I am losing some very simple nuance. In any case, adding a counter-example, or clarifying better the requirements on $\phi$ and $f$, could avoid doubts.
- Example 2.2 provides only a local factorisation for ReLU networks. Of course locality is ok as we are considering gradient flow dynamics, which is local. But I wonder whether something special can happen at the boundaries of the set $\Omega$ defined in Example 2.2, i.e. if there is some gluing condition/gluing phenomenon that may affect the results presented by the authors.
- Is there a commonly used loss for which Eq. 7 is not satisfied?
- The authors stress that their results allow for explicit construction of maximal sets of conserved quantities, yet in the manuscript they provide only an a posteriori verification that known conserved quantities in previously studied architectures indeed form maximal sets. Is there an architecture where new conservation laws can be found through the presented techniques?
## Suggestions for manuscript improvement
line 50: missing closing bracket
line 106: specifying that $D$ is the number of weights, and $d$ is the dimension of the "internal representation" of the decomposition $\mathcal{E} = f \circ \phi$ would be helpful for the reader.
line 119: in the inlined equation, one has a tensor $\phi$ with three free indices equal to an expression without the same three free indices. I suggest to clarify this writing somehow.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors briefly discuss limitations in the main text.
I would add that the analysis seems to be limited to 2-layer linear and ReLU architectures. It is not clear whether a local factorisation of the form Eq. 2 can be found for other architectures. The authors could maybe add a discussion on this point.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the positive comments and constructive suggestions.
**Weaknesses and Questions addressed**
> **L1e** It is not clear to what extent the factorisation property the authors introduce is necessary to their analysis. Said differently, it is not clear whether the analysis could be performed similarly on a concrete example (say fixed dataset) without any mention of a factorisation. The authors could add additional high-level explanations/justifications of this concept.
Thank you for the insightful observation. When you fix the dataset and the loss, you're essentially dealing with a single vector field, specifically: $\theta \mapsto \nabla \mathcal{E} (\theta)$. In this particular scenario, our framework is directly applicable. Given that we're examining a single vector field, its associated Lie algebra has dimension one. This results in $D-1$ conserved quantities. However, it's worth noting that these quantities intricately depend on both the chosen dataset and the specified loss. Consequently, the utility of such an analysis, which is inherently data-dependent, may be somewhat limited. We'll be sure to include this point as a clarifying remark in our work.
> **Q1e** It is not clear to me whether the requirement in Eq. 2 is vacuous, meaning that any architecture automatically satisfies it. Can't I just take $\phi$ as the identity map, and $f = \mathcal{E}$? If this trivial factorisation is possible, are the results still non-trivial? Maybe I am losing some very simple nuance. In any case, adding a counter-example, or clarifying better the requirements on $\phi$ and $f$ could avoid doubts.
Thank you for raising this point! If you adopt this simplistic factorization, then $ \partial \phi (\theta) = I_D$, leading to $V_\phi(\theta) = \mathbb{R}^D$. This means that for this particular $\phi$, there isn't any conservation law. In essence, such a parametrization doesn't have the requisite "tightness" to yield an equivalent of Theorem 2.8. This is a very good remark and we will add it to clarify!
> **Q2e** Example 2.2 provides only a local factorisation for ReLU networks. Of course locality is ok as we are considering gradient flow dynamics, which is local. But I wonder whether something special can happen at the boundaries of the set Ω defined in Example 2.2, i.e. if there is some gluing condition/gluing phenomenon that may affect the results presented by the authors.
In the case of linear / ReLU networks, our analysis shows that there are no conservation laws beyond existing conservation laws, which are known to be global. This addresses the gluing issue for such cases. For more general settings compatible with our analysis, which is indeed only local, gluing conditions are an interesting but possibly difficult challenge, we will comment a bit on this in the revised version.
> **Q3e** Is there a commonly used loss for which Eq. 7 is not satisfied?
As mentioned in Remark A.7, Eq. 7 is not satisfied for cross-entropy loss. However, it is possible to envision weaker assumptions on the span involved in Eq. 7 to extend the theory to such a loss. This is an interesting challenge left for further work.
> **Q4e** The authors stress that their results allow for explicit construction of maximal sets of conserved quantities, yet in the manuscript they provide only an a posteriori verification that known conserved quantities in previously studied architectures indeed form maximal sets. Is there an architecture where new conservation laws can be found through the presented techniques?
In the manuscript, Section 2.4 provides an algorithm (Algo<1>) that constructs directly all polynomial conservation laws. By comparing the number of independent polynomial conservation laws with $D - m$, with $m$ the dimension of the trace of the generated Lie algebra (computed by the algorithm (Algo<2>) described in Section 3.3), we found that the numbers match and that all polynomial conservation laws found by Algo<1> correspond to the ones already known in the literature. By Algo<2>, we know that there are no other conserved quantities.
The only “a posteriori” reasoning in our analysis is to say that all conserved quantities are in fact global as discussed in our answer to **Q2e**.
Regarding the *discovery* of new conservation laws: for architectures involving piecewise polynomial activations, we expect that a polynomial $\phi$ yielding the factorization $f \circ \phi$ can again be found, allowing us to conduct the same analysis with different (polynomial) conservation laws. The main challenge, left to future work, would be to establish an equivalent of Th 2.8.
> **L1e** The authors briefly discuss limitations in the main text. I would add that the analysis seems to be limited to 2-layer linear and ReLU architectures. It is not clear whether a local factorisation of the form Eq. 2 can be found for other architectures. The authors could maybe add a discussion on this point.
For deeper architectures, we still have a local factorization $\phi$ of the form Eq 2, as mentioned in our paper (ll 122-123, appendix A.2, section 4.2). Generalizing Theorem 2.8 to deeper cases is a good challenge, which we leave for future work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing in detail the points I raised in the review.
In particular, the authors adequately addressed my main doubts regarding the role of the factorisation introduced in the paper.
After assessing their comments, I recognise that the factorisation is indeed non-trivial and bears non-trivial consequences.
I decided to update my overall grading to 7 to reflect this. | Summary: This paper studies the conservation law of gradient flow dynamics for training neural networks. The authors propose a method to determine the number of conservation laws in given gradient flow dynamics using Lie algebra generated by the Jacobian vector fields. It is shown, either theoretically or empirically, that the known conservation laws in training linear networks and ReLU networks are also maximal.
Strengths: 1. The paper is well-written and clear. The conservation laws in gradient flow dynamics for training neural networks have facilitated the analysis of convergence and implicit bias, thus deserving formal analysis on finding these conservation laws given any network architecture.
2. This paper has an in-depth discussion of the conservation law under the gradient flow on a class of loss functions.
3. Analysis via Lie algebra that determines the maximum number of conservation laws.
4. Showing existing conservation laws studied for linear and ReLU networks are maximal
Weaknesses: none
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do the results stated for ReLU networks hold for any network with homogeneous activation function?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: authors discussed the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the positive comments and constructive suggestions.
**Question addressed**
> **Q1d** Do the results stated for ReLU networks hold for any network with homogeneous activation function?
Yes, our results also apply to networks using any positively $p$-homogeneous activation function. Specifically, for a 2-layer NN given by $g_{\theta}(x) = U \sigma(V^\top x)$, the conserved quantities are $\|u_i\|^2 - \|v_i\|^2/p$, where $u_i$ and $v_i$ are the columns of $U$ and $V$, respectively. We plan to include this example in the supplementary material of the final version. | Summary: The paper discusses the geometric properties of gradient descent dynamics in ML models. The authors aim to understand the properties of the optimization initialization that are preserved during the dynamics, which is often referred to as being an "implicit bias" of the training algorithm. They also focus on the maximal sets of independent quantities conserved during gradient flows. They have an interesting approach to find the exact number of these conserved quantities by performing algebraic manipulations on the Lie algebra generated by the Jacobian of the model.
The paper's contributions include formalizing the notion of a conservation law in the setting of training neural networks, proposing an algorithm to identify simply expressed (e.g., polynomial) conservation laws on ReLU NNs, and illustrating how these findings can rewrite an over-parameterized flow as an "intrinsic" low-dimensional flow. I find it very intriguing that the commonly reported conservation laws in the literature happen to be maximal (at least empirically).
Strengths: The manuscript has many strengths and overall I think this is a welcomed contribution to the literature. The manuscript covers various aspects of gradient dynamics, conserved functions, conservation laws, Lie algebra, and their applications in neural networks with illustrative examples. There are also potentially interesting practical consequences of this work. It seems to be a relatively practical approach for determining the number of conservation laws using Lie Group computations, at least on NNs with piecewise linear activation functions. It is interesting that their algorithms confirm (at least empirically) that the conservation laws for ReLU NNs match the laws already known.
Weaknesses: A few weaknesses are:
- Mostly restricted to shallow NNs, continuous-time gradient descent, and simple NN architectures. This limits the applications of the theory to practical situations. The continuous-time restriction on the gradient descent training algorithm is perhaps the more significant of these limitations.
- In most situations, the generated Lie algebra is going to be infinite-dimensional. In fact, the two examples in the manuscript are contrived so that the Lie algebra ends up being finite-dimensional. The case where the Lie algebra is infinite-dimensional is only briefly discussed; I would suggest that the authors discuss it more. In particular, the stopping criterion is based on the trace of the generated Lie algebra.
- The above discussion is particularly important when hoping to apply these techniques to NNs with activation functions that are not piecewise linear.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Can you add more discussion on the situation where the Lie algebra is infinite-dimensional?
- Can you add more discussion on what happens when the activation functions of the NN are not piecewise linear?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: There are no potential negative societal impacts of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the positive comments and constructive suggestions.
**Weaknesses and Questions addressed**
> **W1c** Mostly restricted to shallow NNs, continuous-time gradient descent, and simple NN architectures. This limits the applications of the theory to practical situations. The continuous-time restriction on the gradient descent training algorithm is perhaps the more significant of these limitations.
Experiments that display the approximate conservation of these laws during the process of gradient descent (as opposed to gradient flow) can be found in Figure 1 of [7] for a 2-layer linear NN, in Figure 1c of [1] for a 3-layer linear NN, and in Figure 2 of [7] for a 3-layer ReLU NN. We plan to include a more detailed commentary on this observation.
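To make this approximate conservation under discrete gradient descent concrete, here is a minimal self-contained toy check of our own (not one of the cited experiments): for a scalar 2-layer linear model, $u^2 - v^2$ drifts only at second order in the step size.

```python
# Our own toy check: scalar 2-layer linear model g(x) = u * v * x fit to a
# single sample (x, y) = (1, 3) by plain gradient descent on the squared loss.
# In continuous time u^2 - v^2 is exactly conserved; with step size lr the
# discrete iterates satisfy q_{t+1} = q_t * (1 - (2 * lr * e_t)^2), so the
# drift is second order in lr and vanishes as the residual e_t decays.
u, v = 1.5, 0.5
q0 = u * u - v * v                       # the conserved quantity at init
lr = 1e-3
for _ in range(20000):
    e = u * v - 3.0                      # residual
    u, v = u - lr * 2 * e * v, v - lr * 2 * e * u   # simultaneous GD step

assert abs(u * v - 3.0) < 1e-6           # training converged
assert abs((u * u - v * v) - q0) < 1e-2  # u^2 - v^2 approximately conserved
```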
In addition to the standard multilayer linear/ReLU architectures discussed in this paper, preliminary studies, beyond the confines of this work, suggest that more diverse ReLU architectures, which encompass aspects like residual connections and convolutional layers, adhere to such a factorization with polynomial $\phi$. With a suitable adaptation of Theorem 2.8, our entire framework should be applicable in these scenarios.
> **W2c** In most situations, the generated Lie algebra is going to be infinite-dimensional. In fact, the two examples in the manuscript are contrived so that the Lie algebra ends up being finite-dimensional. The discussion on the case when the Lie algebra is infinite-dimensional, is only briefly discussed. I would suggest that the author discussed this more. In particular, the stopping criteria are based on the trace of Lie group algebra.
Generally speaking, the Lie algebra generated can indeed be of infinite dimension. This is particularly the case for deeper linear neural networks where $q > 2$, a point we intend to address in the final version. However, the *trace* of the generated Lie algebra is invariably finite-dimensional—it is bounded by $D$, the total number of parameters—and this trace is our focal point when considering the stopping criterion. As an analogy, the set of all smooth real-valued functions constitutes an infinite-dimensional Lie algebra, yet its trace at any given point corresponds to the finite-dimensional space $\mathbb{R}$, thus having a dimensionality of one. Given that the trace has a finite dimension, we can deduce a basis for it. Consequently, the stopping criterion will be met in a maximum of $D$ steps. We will emphasize this distinction in our work.
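The finite-dimensional trace and the stopping criterion can be illustrated on a toy example. The sketch below is our own construction (names like `chi1`, `bracket` are ours, and this is not the SageMath code of the paper): for $\phi(u_1, u_2, v) = (u_1 v, u_2 v)$, the rank of the trace stops growing after one round of Lie brackets, leaving $D - m = 3 - 2 = 1$ conservation law.

```python
import numpy as np

# Toy model: phi(u1, u2, v) = (u1 * v, u2 * v), so D = 3 parameters, d = 2.
# The rows of the Jacobian of phi give two vector fields chi1, chi2 on R^3.
def chi1(t):
    u1, u2, v = t
    return np.array([v, 0.0, u1])        # gradient of u1 * v

def chi2(t):
    u1, u2, v = t
    return np.array([0.0, v, u2])        # gradient of u2 * v

def bracket(X, Y, t, eps=1e-6):
    # [X, Y](t) = J_Y(t) X(t) - J_X(t) Y(t); Jacobians by central differences
    def jac(F):
        return np.column_stack([(F(t + eps * e) - F(t - eps * e)) / (2 * eps)
                                for e in np.eye(3)])
    return jac(Y) @ X(t) - jac(X) @ Y(t)

t0 = np.array([0.7, -1.3, 0.4])          # a generic point
# Add brackets until the rank of the trace stops growing (here: one step).
m = np.linalg.matrix_rank(
    np.stack([chi1(t0), chi2(t0), bracket(chi1, chi2, t0)]), tol=1e-4)
assert m == 2                            # the bracket stays in the span
# Hence D - m = 1 conservation law; Q = u1^2 + u2^2 - v^2 works, since its
# gradient is orthogonal to both fields:
gradQ = np.array([2 * t0[0], 2 * t0[1], -2 * t0[2]])
assert abs(gradQ @ chi1(t0)) < 1e-12 and abs(gradQ @ chi2(t0)) < 1e-12
```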
> **W3c**, **Q1c**, **Q2c** The above discussion is particularly important for hoping to apply these techniques on NNs with activation functions that are not piecewise linear. Can you add more discussion on the situation where the Lie algebra is infinite-dimensional?
Can you add more discussion on what happens when the activation functions of the NN are not piecewise linear?
Our theory readily accommodates an infinite-dimensional Lie algebra, given that its trace remains finite-dimensional. As an illustration, for deeper linear networks (where $q > 2$), the Lie algebra does become infinite-dimensional. Yet, our theory remains applicable, especially since theorem 2.8 is valid for linear networks irrespective of their depth. We will elucidate this point in the final version. When dealing with more intricate activation functions, our results are directly applicable for any positively $p$-homogeneous activation function. Specifically, for a 2-layer NN represented as $g_{\theta} (x) = U \sigma(V^\top x)$, the conserved quantities are given by $\|u_i\|^2 - \|v_i\|^2/p$, where $u_i$ and $v_i$ denote the columns of $U$ and $V$ respectively. We plan to incorporate this example in the supplementary material of the final version. An intriguing avenue for exploration would be to extend these findings to encompass piecewise linear and piecewise-polynomial activations. | Summary: This paper studies the conservation laws, which are quantities that remain constant, in over-parametrized gradient flows. The authors provide a formal definition for independent conserved functions, which are required to have linearly independent gradients. By applying Frobenius theorem, the authors show that the number of independent conservation laws is linked to the dimension of the trace of the Lie algebra generated by the vector fields spanned by the Jacobian. When this vector field is a Lie algebra, Frobenius theorem can be applied directly to obtain the number of independent conservation laws. When the vector field is not a Lie algebra, the generated Lie algebra need to be computed before applying Frobenius theorem. 
The authors explicitly compute the Lie algebra and its trace for two-layer linear networks and certain ReLU networks, obtain the number of independent conservation laws, and prove that the conservation laws discovered in previous literature are complete. They implement their algorithm in SageMath, which constructs a basis of polynomial conservation laws for the above examples, and successfully verify the number of independent conservation laws.
Strengths: - This paper is the first to formally define and study the number of independent conservation laws in gradient flow. Previous works mostly focus on finding conserved quantities in different architectures or using them in convergence proofs. This paper provides a new perspective and contributes to a unified framework of conservation laws in gradient flows.
- Using Frobenius theorem to characterize the number of independent conservation laws is novel. By linking to the dimension of the trace of the Lie algebra generated by the Jacobian, the authors present the first known method to determine the number of conservation laws. This method yields the interesting result that known conservation laws are complete in 2-layer linear networks.
- The idea that conserved functions define invariant hyper-surfaces which trap the gradient flow is interesting and useful. Studying the dimension of these surfaces directly leads to the proposed definition of independent conserved functions.
- The authors provide a condition under which gradient flows can be recast as low-dimensional Riemannian flows (proposition 3.8), which has potential applications on choosing initializations for better convergence.
Weaknesses: - The paper’s contribution is overall limited in the aspect of applications. Explicit conservation laws are only given for two-layer linear networks and certain two-layer ReLU networks. The analysis is applied to continuous gradient flow only, and there is no discussion or experiment that verifies how well the conservation laws hold in gradient descent. Many neural networks today have more complicated architectures, such as residual connections and various activations other than ReLU, and are often trained with different optimization algorithms, such as Adam. Therefore, while this paper is a promising start toward understanding implicit bias, more work is needed to obtain insights useful for common machine learning tasks.
- The abstract vaguely mentions “understanding desirable properties of optimization initialization in large machine learning models”, but the paper provides little supporting arguments. It is not clear what the desirable properties are, and whether it is possible to extend the conservation laws to more realistic settings in large models.
- The requirement to factor the cost in equation 2 seems strict - $f$ cannot depend on $\theta$ and $\phi$ cannot depend on the data and the loss $l$. Factorization for two-layer ReLU network (Example 2.2) is a good example that extends beyond linear networks. However, it is not clear whether this is possible with other activation functions, where the pre-activation is not piecewise linear.
- There are a few cases where definitions and theorems are mentioned well before the formal statement, for example, in line 134-135, 156-157, 168, 270, etc. Perhaps the organization could be improved to reduce the complexity of the logic flow.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - The factorization for the two-layer ReLU network requires that $\epsilon_{j,x_i} = \mathbb{1}(v_j^\top x_i > 0)$ is constant. How likely is it that this condition holds throughout the gradient flow?
- Would it be possible to include a brief summary of what the Frobenius theorem is about? This theorem appears to be an important foundation, but the form used in this paper (theorem A.12) appears different from the theorem in the given reference [10] (“Theorem 1.4.1. A nonsingular distribution is completely integrable if and only if it is involutive.”)
- Conservation laws are also an important concept in physics. Is the algorithm that constructs conservation laws related to methods in physics, such as Noether's theorem or conserved quantities from Killing vector fields? Has there been similar analysis on the number of conservation laws for physical systems?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors included limitations by clearly stating the assumptions. There are no potential negative societal impacts of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the positive comments and constructive suggestions.
> **W1b** The paper’s contribution is overall limited in the aspect of applications [...]
For ReLU/linear networks *of any depth*, explicit conservation laws of $\phi$ are known (Prop 4.1) and our algorithms allow us to verify on a number of *deep (q>2)* ReLU/linear networks that there are no other ones (section 4.2). The only ingredient of our framework that is not yet extended to deeper (q>2) *ReLU* architectures is Theorem 2.8, which ensures that the conservation laws of $\phi$ (computable with our algorithms) are indeed exactly the conservation laws shared by all $\mathcal{E}$, for any dataset and loss. For linear networks, Theorem 2.8 is valid irrespective of their depth.
Experiments showing approximate conservation of these laws during gradient descent (instead of gradient flow) are given in figure 1 of [7] for 2-layer linear NN, figure 1c of [1] for 3-layer linear NN, and figure 2 of [7] for a 3-layer ReLU NN. We will add a more explicit comment on this fact. Regarding other optimization algorithms, while our framework leaves completely open the question of a similar analysis for stochastic algorithms, it seems feasible to adapt it to deterministic algorithms with momentum using their associated ODE. This is however out of scope and left for further work.
Beyond standard multilayer linear/ReLU architectures covered in this work, it seems feasible but technical (and beyond the scope of this paper) to show that more general ReLU architectures covering e.g. residual connections and convolutive layers, satisfy such a factorization with polynomial $\phi$. The main challenge for a follow-up is then to extend Theorem 2.8, which would make the whole framework applicable in such contexts.
For more general activation functions, our results directly apply when using any positively $p$-homogeneous activation function. For a 2-layer NN defined as $g_{\theta} (x) = U \sigma(V^\top x)$, the conserved quantities are given by $\|u_i\|^2 - \|v_i\|^2/p$, where $u_i$ and $v_i$ are the columns of $U$ and $V$ respectively. We plan to incorporate this example in the supplementary material of the final version. A compelling direction for future exploration would be to extend these results to piecewise linear and piecewise-polynomial activations.
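For concreteness, a minimal numerical sketch of this conservation (illustrative only; the variable names and hyperparameters are ours, not the paper's code): small-step gradient descent approximating the gradient flow on a 2-layer network with the positively $2$-homogeneous squared-ReLU activation keeps $\|u_i\|^2 - \|v_i\|^2/p$ nearly constant.

```python
import numpy as np

# Sketch: g(x) = U sigma(V^T x) with sigma(z) = relu(z)^p, p-homogeneous.
# The quantities Q_i = ||u_i||^2 - ||v_i||^2 / p are exactly conserved under
# gradient flow; under small-step gradient descent they drift only by O(lr).
rng = np.random.default_rng(0)
p = 2                       # squared ReLU, positively 2-homogeneous
n, d, h, o = 32, 5, 4, 3    # samples, input dim, hidden width, output dim
X = rng.normal(size=(n, d)) / np.sqrt(d)
Y = rng.normal(size=(n, o))
U = rng.normal(size=(o, h))
V = rng.normal(size=(d, h))

sigma = lambda z: np.maximum(z, 0.0) ** p
dsigma = lambda z: p * np.maximum(z, 0.0) ** (p - 1)

Q0 = (U**2).sum(axis=0) - (V**2).sum(axis=0) / p  # initial conserved values
lr = 1e-3
for _ in range(300):
    Z = X @ V                              # (n, h) pre-activations
    R = sigma(Z) @ U.T - Y                 # (n, o) residuals of squared loss
    gU = R.T @ sigma(Z) / n                # dL/dU
    gV = X.T @ ((R @ U) * dsigma(Z)) / n   # dL/dV
    U -= lr * gU
    V -= lr * gV

Q1 = (U**2).sum(axis=0) - (V**2).sum(axis=0) / p
drift = np.max(np.abs(Q1 - Q0))  # small: only Euler discretization error
```

The key step in the derivation is Euler's identity $\sigma'(z)z = p\,\sigma(z)$ for positively $p$-homogeneous $\sigma$, which makes $\frac{d}{dt}\|v_i\|^2 = p\,\frac{d}{dt}\|u_i\|^2$ along the flow.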
> **W2b** The abstract vaguely mentions [...]
In general terms, "desirable properties" of initialization refer to characteristics that ensure convergence, and potentially faster convergence, to an optimal solution. For instance, the utilization of conservation laws in [6] and [*] demonstrates convergence, while [3] illustrates that initializing with certain values of the conserved function can lead to accelerated convergence. We will clarify this further in the final version of the paper. Also, refer to our response to **W1b**.
[*] "On the Convergence of Gradient Descent Training for Two-layer ReLU-networks in the Mean Field Regime" by S. Wojtowytsch, 2020, preprint.
> **W3b** The requirement to factor the cost in eq. 2 seems strict [...]
Eq. 2 is in fact not very demanding: we can always write $\mathcal{E} = f \circ \phi$ with $f = \mathcal{E}$ and $\phi = id$. However, the number of conservation laws of $\phi = id$ is zero, and this trivial factorization fails to capture the existence and number of conservation laws as studied in this paper. This suggests that, among all existing factorizations $\mathcal{E} = f \circ \phi$, there may be a notion of an optimal one, such that an equivalent of Theorem 2.8 holds. This is an interesting challenge for future work. Thank you for the opportunity to clarify this point, which we will mention explicitly.
Regarding the ability to handle other activation functions: piecewise linearity of $x \mapsto g_\theta(x)$ (as in the linear and ReLU cases) is not important, and extensions e.g. to positively $p$-homogeneous activations (e.g. the squared ReLU) can be achieved [see our response to **W1b**].
> **W4b** There are a few cases [...]
Thank you for this suggestion, we will keep it in mind for the final version.
> **Q1b** The factorization for 2-layer ReLU network requires that $\mathbb{1}(v\_j^\top x_i > 0)$ is constant. How likely does this condition hold throughout the gradient flow?
This condition will *not* be preserved throughout the gradient flow; however, as soon as $\mathbb{1}(v\_j^\top x_i > 0)$ is *locally* constant we can conduct our analysis. Thus, apart from some instants along the trajectory where these activations change, the whole analysis is valid and allows us to characterize the (number of independent) functions that are conserved.
> **Q2b** Would it be possible to include a brief summary of what the Frobenius theorem is about? [...]
Thank you for the suggestion! In the final version's supplementary material, we'll include a summary as you've recommended. Additionally, we'll provide a section that clarifies the translation of notations and vocabulary between the theorem mentioned in the given reference and our paper. When we refer to a "non-singular distribution", it implies that the dimension of the associated trace remains constant (refer to the definition of "non-singular" on page 15 of [10]). Being "involutively consistent" directly relates to our second assertion using the Lie bracket (see eq. 1.13 on page 17 of [10]). Lastly, "completely integrable" aligns with our first assertion regarding orthogonality conditions (refer to eq. 1.16 on page 23 of [10]).
> **Q3b** Conservation law is also an important concept in physics [...]
Our theorem is indeed related to invariance in the model (each invariance, such as scaling in ReLU networks, is associated to a conserved quantity). We will mention and clarify this connection in the revised version. This being said, we were not able to draw a precise connection with Noether's theorem, and the settings where Noether's theorem and the Frobenius theorem apply seem rather different.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I appreciate the clarification on the factorization and the Frobenius theorem. I believe the theoretical contributions are significant and have increased my score. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper studies conservation laws for deep neural networks when trained under gradient flow. Here, conservation laws refer to functions of network parameters that are invariant under gradient flow and such laws can potentially help us understand the training dynamics but constraining the manifold of parameters to a low-dimensional space and define robust symmetries of the underlying flow. This line of research had proven to be extremely fruitful for other areas of science, such as physics. The paper studies the number of such conservation laws for generic loss functions and datasets by factorizing the network function. They derive an analytical recipe to derive these numbers and provide explicit examples for simple network architectures. Finally, they provide an algorithm to compute these numbers.
Strengths: The paper is mathematically engaging, well-written and the content is presented clearly.
While previous work extensively studied conservation laws, these work often restricted to specific architectures such as deep linear networks and shallow ReLU networks. This study, as far as I am aware, is the first to generically study symmetries in training dynamics for arbitrary loss functions and datasets, and can be considered as a first step towards understanding the implicit constraints coming from symmetries of various network architectures.
I believe this line of research may be impactful in many problems in DNN community such as pruning deep neural networks, principled approaches to building DNNs and more efficient training strategies.
Weaknesses: 1. The paper is highly technical and only provides generic mathematical tools without bringing additional insights over previous findings. Space permitting, it would be helpful to show more examples that go beyond what is already known.
2. The main contribution of the paper is to derive the number of conserved quantities of a given neural network. There is no comment on the explicit constructions of such quantities in generic cases, and no application cases where these numbers can be helpful.
3. The main text lacks experiments, and the ones discussed in supplementary material are not sufficient. Explicit demonstration of conservation laws in simple neural network training might strengthen the paper.
4. The definition of the main algorithm is obscure. A step-by-step implementation might help for clarity.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. At line 120, if I am not mistaken, the fidelity function should be $f(\phi) = \sum_i \ell\left(\sum_{j,l} \varepsilon_{j, x_i} \phi_{j,k,l} (x_i)_l, y_i\right)$, i.e. there is no summation over $k$ and the input is forgotten. Depending on how the authors feel, a subscript $f_\Omega$ can be added to emphasize locality.
2. In Eq. 3, a more generic expression should include the explicit derivative of the data fidelity function, since gradient flow may take us out the domain $\Omega$ for which $df/d\phi = \partial_\phi f$. Or are you implicitly assuming that infinitesimal gradient flow guarantees such deviations (for example in lazy learning regime)?
3. The statement at line 168 is ambiguous to me; do you mean $\nabla h \perp \chi, \forall \chi \in V$?
4. It seems to me that the effect of loss function decouples from the analysis, since the arguments only depend on $\phi$. Is it because the number of conserved quantities do not depend on the loss landscape but only the explicit form?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The applicability of the theory to practical neural networks is very restricted at the moment.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the positive comments and constructive suggestions.
**Weaknesses and Questions addressed**
> **W1a** The paper is highly technical and only provides generic mathematical tools without bringing additional insights over previous findings. Space permitting, it would be helpful to show more examples that go beyond what is already known.
Beyond ReLU and linear networks, which are 1-homogeneous, our results also apply to networks using any positively $p$-homogeneous activation function. Specifically, for a 2-layer NN given by $g_{\theta}(x) = U \sigma(V^\top x) $, the conserved quantities are $\|u_i\|^2 - \|v_i\|^2/p$, where $u_i$ and $v_i$ are the columns of $U$ and $V$, respectively. We plan to include this example in the supplementary material of the final version. While our framework and its generalization (refer to **W2a**) can cover more architectures, discussing them is beyond the scope of this paper.
> **W2a** There is no comment on the explicit constructions of such quantities in generic cases, and no application cases where these numbers can be helpful.
We believe that determining this number is a fundamental question in the study of neural networks, as it can put an end to the “quest” for potential additional laws. We disagree with the perception of a “lack of comment on explicit constructions”: we provide explicit algorithms to compute both the number of conservation laws and the laws themselves, particularly when the models are polynomial. One significant application of these conservation laws is their ability to demonstrate that, in certain cases, high-dimensional flows can be recast in lower dimensions, see Section 3.4. Another application, which we will further emphasize in the paper, aids in convergence proofs; for instance, Theorem 5 of [6] and Section 2.5 of [*], utilize balancedness conditions which are conservation laws. While it is beyond the scope of this paper, our theory could easily be applied to other architectures, including residual connections, convolutional layers, and piecewise polynomial activations.
[*] On the Convergence of Gradient Descent Training for Two-layer ReLU-networks in the Mean Field Regime, S. Wojtowytsch, 2020, preprint
> **W3a** The main text lacks experiments, and the ones discussed in supplementary material are not sufficient. Explicit demonstration of conservation laws in simple neural network training might strengthen the paper.
Numerical illustrations of conservation laws can be found, for instance, in Figure 1 of [7] for a 2-layer linear NN, in Figure 1c of [1] for a 3-layer linear NN, and in Figure 2 of [7] for a 3-layer ReLU NN. We will include these references. In the final version of the paper, we will incorporate such a figure into the supplementary material to further emphasize our main message.
> **W4a** The definition of the main algorithm is obscure. A step-by-step implementation might help for clarity.
Thank you for the suggestion, we will add a pseudo-code in the final version to clarify it.
> **Q1a** At line 120, if I am not mistaken, the fidelity function should be […] i.e. there is no summation over $k$ and the input is forgotten. Depending on how the authors feel, a subscript $f_{|\Omega}$ can be added to emphasize locality.
Thank you; indeed there was a typo, and we will correct it with a simpler (and correct!) expression using
$ \phi_j = \phi_j(\theta) := u_jv_j^\top \in \mathbb{R}^{n \times m}$, $\phi(\theta) = (\phi_j )\_{j}$ and
$ f(\phi) = \sum\_i \ell( \sum\_j \epsilon_{j,x_i} \phi_j x_i, y_i)$.
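A quick numerical sanity check of this factorization (an illustrative sketch with our own variable names, not the paper's code): on a fixed activation pattern, the ReLU network output equals the $\epsilon$-weighted sum of the rank-one factors applied to the input.

```python
import numpy as np

# Sketch: verify that a 2-layer ReLU network factors as
#   g_theta(x) = U relu(V^T x) = sum_j eps_{j,x} * (u_j v_j^T) x,
# where eps_{j,x} = 1(v_j^T x > 0) and phi_j(theta) = u_j v_j^T.
rng = np.random.default_rng(1)
m, n_out, h = 4, 3, 5          # input dim, output dim, hidden width
V = rng.normal(size=(m, h))
U = rng.normal(size=(n_out, h))
x = rng.normal(size=m)

direct = U @ np.maximum(V.T @ x, 0.0)                  # U relu(V^T x)
eps = (V.T @ x > 0).astype(float)                      # activation pattern
phi = [np.outer(U[:, j], V[:, j]) for j in range(h)]   # phi_j = u_j v_j^T
factored = sum(eps[j] * (phi[j] @ x) for j in range(h))
assert np.allclose(direct, factored)
```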
> **Q2a** In Eq. 3, a more generic expression should include the explicit derivative of the data fidelity function, since gradient flow may take us out the domain $\Omega$ for which $df / d \phi = \partial\_{\phi} f$. Or are you implicitly assuming that infinitesimal gradient flow guarantees such deviations (for example in lazy learning regime)?
Indeed it is a good idea to clarify by first writing that the gradient flow on $\mathcal{E}$ is defined as $\dot{\theta}(t) = -\nabla \mathcal{E}(\theta(t))$. Since we assume the factorization $\mathcal{E} = f \circ \phi$ on a neighborhood of $\theta_0$, for sufficiently small $t$ we deduce that the gradient flow satisfies Eq. 3. Our analysis of conservation laws is *local* to avoid considering what happens when we leave the domain.
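In display form, the chain-rule step reads (a sketch; the exact form of Eq. 3 in the paper may differ in notation):

$$\dot{\theta}(t) = -\nabla \mathcal{E}(\theta(t)) = -\partial \phi(\theta(t))^\top \nabla f(\phi(\theta(t))),$$

valid for $\theta(t)$ in the neighborhood of $\theta_0$ where $\mathcal{E} = f \circ \phi$ holds; a conserved function $h$ then satisfies $\frac{d}{dt} h(\theta(t)) = \langle \nabla h(\theta(t)), \dot{\theta}(t) \rangle = 0$ along these local trajectories.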
> **Q3a** The statement at line 168 is ambiguous to me; do you mean $\nabla h \perp \chi, \forall \chi \in V$?
The statement that we require is stronger: it means that $\nabla h(\theta) \perp \chi(\theta), \forall \chi \in V, \forall \theta \in \Omega$. In other words, it is a pointwise assumption: at every point $\theta$, the vector $\nabla h(\theta) \in \mathbb{R}^D$ is orthogonal to the subspace $V(\theta) \subseteq \mathbb{R}^D$.
> **Q4a** It seems to me that the effect of loss function decouples from the analysis, since the arguments only depend on \phi. Is it because the number of conserved quantities do not depend on the loss landscape but only the explicit form?
As summarized in ll 125–129, the main idea is indeed to decouple as much as possible the study of the conserved functions from the particularities of a dataset or a given loss. This is made possible when the factorization $\mathcal{E} = f \circ \phi$ holds, and under some assumptions on the loss (see eq.7).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. I will increase my score by 1. | null | null | null | null | null | null |
Point Cloud Completion with Pretrained Text-to-Image Diffusion Models | Accept (poster) | Summary: This paper proposes an optimization method for point cloud completion by leveraging a 2D diffusion model. By constructing a neural SDF field, the proposed method makes sure that the represented object matches the input partial point cloud, while the rendered images fit a pre-trained Stable Diffusion model under the guidance of the SDS loss. The results show that the proposed method does not need a large dataset as previous works do.
Strengths: 1. The proposed method does not need a large dataset for training a class-specific completion model, making it more practical.
Weaknesses: 1. Looks like the proposed method simply adds a new constraint (matching surface with input partial point cloud) on DreamFusion-like structure and the rest are just existing modules and techniques. Although I believe this is technically not easy, the novelty is still limited.
2. In Figure 1, the author mentioned in distribution object and out-of-distribution object. However, this grouping method is meaningless for the proposed method since a pre-trained Stable Diffusion is utilized. The real out-of-distribution object for this model should be an object that even stable diffusion cannot generate a reasonable image for it, and clearly this situation is fatal for the proposed method. I think the author should not emphasize in or out-of-distribution here.
3. The comparisons in this paper seem to be unfair since previous works are trained under a totally different setting. The authors should show results on the same dataset (ShapeNet) as the previous works, on which I believe existing works would outperform the proposed method.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. I notice that you still have a color MLP head for the object even though your task is just point cloud completion. Is it because rendering colored images is more favorable for Stable Diffusion? Is it possible to take off the color head and directly assign a fixed color to your neural SDF and add the corresponding color to the input text for Stable Diffusion to reduce the complexity of your neural SDF?
2. Please respond to the weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the comments and the feedback from the reviewer.
W1: Technical Contribution and Novelty:
To the best of our knowledge, and as stated by reviewer 1, our method is the first to use a pre-trained vision-language model for completing point clouds. That allows us to complete partial point clouds from unseen classes, and without collecting any 3D data, as required by previous approaches. Using the SDS-loss for point cloud completion is far from simple, as indicated by the ablation study in Figure 6 (main paper).
W2: The author should not emphasize in or out-of-distribution in Figure 1:
We thank the reviewer for this comment, and we will update the figure accordingly.
W3: The authors should show results on Shapenet:
Our work addresses the problem of completing point clouds captured "in the wild", rather than on closed-world synthetic data. We expect methods trained to complete PCs of specific predefined classes on synthetic data to perform well in that setting, but as our paper demonstrates, they struggle when applied to open-world settings. We focus the experiments on real partial point cloud data captured by real sensors rather than on synthetic data sampled artificially from CAD models. Unlike the previous methods, our method was not trained on synthetic datasets, so it would be unfair to base the comparisons on these datasets. Neither our method nor the baselines were trained on real data, so we find it fair to base the comparison on real-world data.
Q1: Is it possible to take off the color head?
We thank the reviewer for the interesting suggestion. We implemented this suggestion by setting a constant gray color to be the color of the object and changing the text prompt to contain the word “gray” in the object description. For rendering the object details we applied shading as suggested by [10]. The results are comparable to the results with the RGB branch in terms of Chamfer distance on the evaluation set (31.3mm compared to 30.5mm with the color MLP head). We conclude that such renderings are in the distribution of Stable Diffusion such that they can be used for completing the surface correctly given the signal from the point cloud. The color MLP head does help in understanding the semantic content that is being generated, especially in failure cases, but it can definitely be omitted for simplifying the network architecture if resources are limited. We will discuss this in the revised paper.
---
Rebuttal Comment 1.1:
Title: Following comment
Comment: Thank the authors for the reply.
For W1, I still think the fact that this work uses a pipeline very similar to DreamFusion and Magic3D without much insight makes the novelty very limited. Also, Figure 6 shows the effectiveness of SDS loss, but since SDS loss is not a novel idea, I don't get what you mean about "far from simple". To me, I would rather call it a simple but effective trick.
Although the novelty is very limited, I appreciate the effort in exploring new settings. Therefore, I will keep my rating for now.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response and for the insightful discussion.
‘I don't get what you mean about "using the SDS-loss for point cloud completion is far from simple"’:
The SDS loss by itself, even when combined with the SDF representation, yields inferior results. Rendering the SDF from random views and using the SDS loss does not complete the point cloud successfully, as indicated by our qualitative and quantitative ablation study in Figure 6 (main paper), "random cameras". Our camera sampling is a key part that allows us to complete the object by sampling cameras in a controlled manner, starting from the viewpoint of the sensor that captured the points, and enlarging the range gradually. This allows us to work with a non-detailed text description of the object without any diffusion model personalization techniques. By starting from the sensor's viewpoint, the simple description and the partial point cloud allow the completion to be compatible with both the text and the point cloud at earlier phases. Then, due to the gradual sampling, the rest of the completion consistently matches the already completed part and the text description, until the entire object is completed successfully. During this process, we verify that the diffusion model does not see an "unnatural" pose of the object, by aligning the camera coordinates with the plane that the object is located on. Technically, we verify that the horizontal axis of the camera is orthogonal to the normal of the plane. We will emphasize this contribution in Section 4.2. | Summary: This paper tackles point cloud completion with SDS-Complete, which aims at improving out-of-distribution results with a pre-trained Stable Diffusion model for score distillation sampling (SDS). Specifically, the authors propose to learn the MLPs for SDF and color with rendering-based losses with respect to the depth and the mask from the sensor's perspective. For novel views, SDS-Complete guides the rendered results with SDS, which enforces visual-language semantics. Experiments are conducted on the Redwood 3D Scans dataset with state-of-the-art results for OOD objects.
The author also provides qualitative results on the KITTI LiDAR dataset.
Strengths: - Figures are clear and method descriptions are detailed.
- Results of comparison experiments are encouraging.
- The motivation is clear, and the relevant pipeline design is intuitive and effective.
Weaknesses: - Limited technical contribution. Including pre-trained vision-language models to improve out-of-distribution results does not bring new insights to the readers, and the approach used to include them (score distillation sampling in this paper) is not new.
- Limited evaluation. The experimental results are not enough to support this paper's claim. It would be good if the author can provide results on ShapeNet and other datasets.
- Weak ablation study. How much does SDS help with OOD data? It would be good if the author can provide step-by-step ablation results and more analysis on improvements with respect to OOD data.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Will the performance be affected if fewer initial points are used as the input?
- How much do the depth and mask inputs help in terms of performance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The author commented on the limitations of SDS-Complete in terms of the huge number of sampling views and the topology of objects to be completed. It would be good if they can further specify the runtime and the number of sampling views.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the comments and the feedback from the reviewer.
W1: Technical Contribution and Novelty:
To the best of our knowledge, and as stated by reviewer 1, our method is the first to use a pre-trained vision-language model for completing point clouds. That allows us to complete partial point clouds from unseen classes, and without collecting any 3D data, as required by previous approaches. Using the SDS-loss for point cloud completion is far from simple, as indicated by the ablation study in Figure 6 (main paper).
W2: provide results on ShapeNet
Our work addresses the problem of completing point clouds captured "in the wild", rather than on closed-world synthetic data. We expect methods trained to complete PCs of specific predefined classes on synthetic data to perform well in that setting, but as our paper demonstrates, they struggle when applied to open-world settings. We focus the experiments on real partial point cloud data captured by real sensors rather than on synthetic data sampled artificially from CAD models. Unlike the previous methods, our method was not trained on synthetic datasets, so it would be unfair to base the comparisons on these datasets. Neither our method nor the baselines were trained on real data, so we find it fair to base the comparison on real-world data.
W3: How much does SDS help with OOD data?
Figure 6 in the main paper provides a quantitative ablation study addressing this question. It shows that without the SDS loss, the error is significantly higher. Following the reviewer's request, we present the numbers in a new Table 1 (Rebuttal PDF) with separate columns for in-distribution and OOD objects.
W4: Will the performance be affected if fewer initial points are used as the input?
To address the reviewer's question, we ran our method with 50% and 10% of the original input points on the evaluation set from the Redwood dataset. We found no significant difference in the results (up to 1.5 mm difference in terms of average Chamfer distance). These results indicate that our method is robust to the number of input points. We will include this experiment in the revised paper.
Q1: How much do the depth and mask inputs help in terms of performance?
Following the reviewer’s question we add to the ablation study two rows: “No Depth” and “No Mask”. The numbers are presented in Table 1 of the rebuttal PDF. It can be seen that “No Mask” has a more significant impact on the accuracy, and that without the depth loss, the error is higher by about 10 percent.
Q2: Further specifying the runtime and the number of sampling views:
See supplementary, Section 5, “Running time”. Our test time optimization method is slow compared to the feedforward models of the baseline methods, but works much better on real point clouds, and does not require any dataset of 3D shapes for training as required by the baseline methods.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their reply, which addressed my concern about the ablation study.
Albeit with limited insight, this paper made efforts toward zero-shot point completion. I do appreciate that and I think it's worth a raise in rating with more convincing experimental results.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response and for the insightful discussion.
Following a suggestion by Reviewer BnfZ, we added additional quantitative results on KITTI. We include them here as well for making it easier to follow the discussion:
We follow PCN and calculate the Minimal Matching Distance (MMD). MMD is the Chamfer Distance (CD) between the output surface and the surface from ShapeNet that is closest to the input point cloud in terms of CD. We calculated this metric on the surfaces that were evaluated in our user study from two categories: car and motorcycle. These are the only categories that have associated ShapeNet subsets, which is a necessary condition for calculating the MMD metric. The mean MMD over the motorcycle and car shapes is presented in the table below, showing that our approach improves over the baselines:
| | MMD ↓ |
|-------------|-------:|
| ShapeFormer | 0.035 |
| PoinTr | 0.039 |
| Ours | 0.030 |
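For reference, a minimal sketch of how these metrics can be computed (function and variable names are ours, not from PCN's code):

```python
import numpy as np

def chamfer(a, b):
    # symmetric Chamfer distance between point sets a: (n, 3) and b: (m, 3)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise squared dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def mmd(pred, reference_shapes):
    # Minimal Matching Distance: CD to the closest reference (e.g. ShapeNet) shape
    return min(chamfer(pred, s) for s in reference_shapes)
```

MMD is therefore low when the completed surface resembles at least one clean reference shape of the class.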
We further computed the CLIP R-Precision metric [10] on all of our evaluated KITTI categories: "car", "truck", "motorcycle", and "excavator". This metric checks the accuracy of classifying a rendered image by choosing the class that maximizes the cosine similarity score between the image and the text "a rendering of a <class name>" among all classes. We evaluated CLIP R-Precision on the output meshes of the different methods, each rendered from 360 degrees with azimuth gaps of 2 degrees (180 images for each surface). We report the mean accuracy below. Here again, our approach is substantially better:
| | Accuracy (%) ↑ |
|-------------|---------------:|
| ShapeFormer | 50.0 |
| PoinTr | 40.6 |
| Ours | 75.7 |
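The accuracy computation itself reduces to nearest-text classification in CLIP embedding space; a sketch (embeddings assumed precomputed by a CLIP model; names are ours):

```python
import numpy as np

def clip_r_precision(img_embs, text_embs, true_class):
    # img_embs:  (n_renders, d) CLIP embeddings of the rendered views
    # text_embs: (n_classes, d) CLIP embeddings of "a rendering of a <class name>"
    # Returns the fraction of renders whose max-cosine-similarity class is correct.
    img = img_embs / np.linalg.norm(img_embs, axis=1, keepdims=True)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    pred = (img @ txt.T).argmax(axis=1)   # best-matching class per render
    return float((pred == true_class).mean())
```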
These metrics show that our method is better at reconstructing surfaces from partial real LiDAR scans compared with previous methods. We will include these experiments in the revised paper. | Summary: This paper proposes a point cloud completion method with the help of text-to-image diffusion model and formulate the point cloud completion as a test-time optimization problem. It exploits the SDS loss proposed in Dreamfusion to generate 3D given text prompt, with a text-to-image diffusion model. Additionally, it also utilizes the supervision from the input partial point cloud (and also the depth and mask from sensor) for better regularization. And the authors also propose a camera sampling strategy to let the optimized shape follows the input point cloud. Since the model is built on top of the diffusion model's prior, it doesn't have the severe OOD problem as for previous train-test point cloud completion baselines.
Strengths: By framing the task as a test-time optimization problem, the proposed SDS-Complete doesn't rely on training with large-scale datasets and instead uses the prior from text-to-image models, which greatly boosts the performance on OOD classes. Different from the uniform camera sampling in DreamFusion, which is used to generate objects from scratch, the authors also proposed a well-motivated camera handling procedure, which helps the model fit the input partial point cloud and prevents the diffusion model from affecting this geometry.
Weaknesses: I think the proposed method lacks sufficient ablation studies. I did not find an ablation study (even a qualitative one) of the proposed camera handling procedure. There may also be some paper organization problems; see the questions below.
I also believe the time required is a weakness compared to train-test methods, which I think should be discussed in the paper, although it is currently hard to solve.
Some typos. In L231 I think it should be "multi-modal".
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I am confused by the epoch-iteration training schedule and the proposed camera handling. In the total 200K iterations (2k epochs x 100 iterations per epoch), are the different camera angles sampled in a looping manner, or is one angle sampled for many iterations and never revisited? I think the looping manner is the correct one, but if the rendering angles are looped over again and again, it suggests the motivation for camera handling is not that important.
2. I wonder what the optimization time is compared with DreamFusion. Since the point cloud completion task provides stronger supervision or priors, I assume comparable geometry can be obtained with fewer iterations? (Although more iterations would certainly give better results.)
3. Some results seem unexpected, such as the second row in Figure 3: the chair leg covered by the point cloud is intermittent, but the leg behind it, without point cloud supervision, is perfect. Is this because the point cloud is sparse, so the L_p loss can sometimes have a bad influence?
4. Is the low 80x80 rendering resolution a bottleneck? I remember it is not the rendering resolution used in DreamFusion.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have a discussion of their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the comments and the feedback from the reviewer.
W1: Missing Ablation study for camera handling:
Figure 6 in the main paper shows a table with a quantitative ablation study on the camera handling procedure (“Random camera: running without our camera handling that is described in Sec. 4.2”). It shows that without our camera sampling, the mean Chamfer error is significantly larger.
W2: Time required is also a weakness that should be discussed:
See the discussion of the limitations in L258. Our test-time optimization method is slow compared to the feedforward models of the baseline methods, but its quality on real point clouds is far superior, and it does not require any dataset of 3D shapes for training, as the baseline methods do. We will emphasize this point more clearly in the revised version.
Some typos in L231:
We will fix this in the revised paper. Thank you.
Q1: Camera Sampling Missing Details:
L76 in the SM gives more implementation details. At each iteration, the cameras are sampled randomly from a uniform distribution with a limited range. This range increases during training, where the full range is applied from epoch 120. That allows the color and the geometry to be optimized consistently and gradually, starting from the areas that are covered by the sensor, to the areas without any input points. See the ablation study in Figure 6 of the main paper for quantitative and qualitative ablation studies on camera sampling.
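A minimal sketch of what such a gradually widening camera sampling schedule could look like (the function, the linear ramp, and the default numbers are our illustration, not the authors' exact implementation; only the epoch-120 full-range threshold is taken from the reply above):

```python
import random

def sample_camera_azimuth(epoch, sensor_azimuth=0.0,
                          full_half_range=180.0, ramp_epochs=120):
    """Sample an azimuth (degrees) from a uniform range centered on the
    sensor's viewpoint; the range widens linearly until `ramp_epochs`,
    after which the full range is used."""
    frac = min(epoch / ramp_epochs, 1.0)
    half = frac * full_half_range
    return sensor_azimuth + random.uniform(-half, half)
```

Early epochs thus only show the diffusion model views close to the sensor's viewpoint, and unseen regions are optimized gradually as the range grows.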
Q2 What is the optimization time compared to DreamFusion?
Most of the time is consumed by pushing gradients for the SDS loss.
By default, DreamFusion is trained for 100 epochs. In our submitted version we let our method run for 2000 epochs. Following the reviewer’s question, we present in Table 2 (Rebuttal PDF) the effect of running our method for a shorter time. It can be seen that after 5% (100 epochs) of the total running time, our method already outperforms the baseline in terms of average Chamfer distance. Allowing more running time reduces our error even further.
Q3: Intermittent patterns in the legs of the chair
The thin structure of the input points is too weak for constraining the surface of the outside chair presented in Figure 3. Unfortunately, the SDS loss does not help in making the surface thicker, since the rendering of the thin surface looks valid. This is due to our rendering process, which uses VolSDF to define a smooth mapping from the surface to density (Equation 5); since the entire leg has SDF values close to 0, the legs receive densities that produce valid low-resolution renderings. We demonstrate the issue in Figure 1 (Rebuttal PDF). We will discuss this limitation in the revised version of the paper.
Q4: Is the low rendering resolution 80x80 a bottleneck?
DreamFusion is trained with 64x64 rendered images. In general, higher resolution requires more GPU memory: 64x64, 80x80, and 128x128 renderings require 16GB (Google Colab), 32GB (V100), and 80GB (A100) of memory, respectively.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed reply. I have also checked the other reviewers' opinions and the authors' responses.
I think the rebuttal answers my concerns. I would like to see the epoch ablation and the discussion of the rebuttal's Fig. 1 included in the revised version. The VolSDF formulation entangled with low-resolution rendering reveals an underlying problem in this case, and I would like to see it discussed in the paper.
I also agree with the other reviewers' opinion that the idea has limited novelty. However, I think this work has a high degree of completeness and also demonstrates some new scenarios in certain cases. For that, I will keep my rating at weak accept, though leaning slightly toward the borderline side; any discussion is welcome.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response and for the insightful discussion.
Epoch ablation and rebuttal's fig.1
We will add these results to the revised version with a proper discussion.
“Limited novelty”
To the best of our knowledge, our method is the first to use a pre-trained vision-language model for completing point clouds. That allows us to complete partial point clouds from unseen classes, and without collecting any 3D data, as required by previous approaches. The SDS loss by itself, even when combined with the SDF representation, yields inferior results. Trying to render the SDF from random views and using the SDS loss does not complete the point cloud successfully, as indicated by our qualitative and quantitative ablation study in Figure 6 (main paper), “random cameras”. Our camera sampling is a key part that allows us to complete the object by sampling cameras in a controlled manner, starting from the viewpoint of the sensor that captured the points and enlarging the range gradually. This allows us to work with a non-detailed text description of the object without any diffusion model personalization techniques. By starting from the sensor’s viewpoint, the simple description and the partial point cloud allow the completion to be compatible with both the text and the point cloud at earlier phases. Then, due to the gradual sampling, the rest of the completion consistently matches the already completed part and the text description, until the entire object is completed successfully. During this process, we verify that the diffusion model does not see an “unnatural” pose of the object, by aligning the camera coordinates with the plane that the object is located on. Technically, we verify that the horizontal axis of the camera is orthogonal to the normal of the plane. We will emphasize this contribution in Section 4.2. | Summary: This paper proposes a novel method for completing a 3D object from its incomplete input shape. It specifically focuses on out-of-distribution objects and proposes a diffusion-model-based framework that generatively learns an SDF for the complete shapes. The idea is simple and easy to follow.
Strengths: The idea of using cross-modal generative network to complete partial shape is novel.
The writing and organization are good.
Weaknesses: 1. The completion of out-of-distribution objects aims at practical usage. However, the proposed SDS-Complete requires up to 5 different inputs for completing a partial shape, which is contrary to the original intention of out-of-distribution shape completion. Specifically, according to the description in Sec. 4, SDS-Complete requires: 1) a depth image; 2) a segmented point cloud; 3) a segmented binary mask; 4) a text representation; 5) the internal parameters of the sensor, and each of them is indispensable. Therefore, the proposed method is highly impractical for real-world scenarios compared with image reconstruction or shape completion from a single input.
2. As for the in-domain completion results visualized in Figure 3, the complete shape generated by PoinTr is highly suspicious and may be unfair. The chair is one of the most commonly used categories in shape completion experiments, and PoinTr has been referenced by many previous studies as generating robust predictions on this chair category. However, in Figure 3, PoinTr even fails to predict the missing chair legs, which, in the reviewer's opinion, is highly unlikely if the training procedure is correct.
3. According to the above discussion of visualization results, the quantitative results in Table 1 are insufficient to prove the effectiveness of the proposed method in terms of out-of-domain shape completion task.
4. Moreover, in-domain comparison should at least be conducted on one of the popular completion benchmarks such as ShapeNet dataset, MVP dataset or PCN dataset, but none of them appear in the experiments.
5. Quantitative results on the KITTI dataset are missing; the 5 samples in Figure 5 are highly insufficient to verify the effectiveness of the proposed method on the KITTI dataset.
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: See weaknesses
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations have been addressed in the draft.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the comments and the feedback from the reviewer.
W1: The method requires 5 input types
Our setup follows an existing protocol that is presented in ShapeFormer which uses real partial point clouds that are extracted from real-world sensors. In contrast to synthetic setups where each point cloud is artificially sampled from a single CAD model, the evaluated real point clouds are recorded by an active depth sensor (LiDAR or depth camera) that stores for each ray the distance of the point from the sensor location. Therefore, depth information is not a limitation for real point clouds since it is inherently available in real PC data and does not limit the method in practice. The internal parameters for each sensor are available online.
Note that these real sensors record a complete scene and therefore for isolating the relevant object from the scene (for any of the baseline methods), it is required to segment out the object from the scene. Regarding text input, some other completion methods [3-6] use different trained models for each class (tables, chairs, …), thus they assume that the object class is known.
W2: Complete shape generated by PoinTr is highly suspicious and may be unfair:
Note that PoinTr was trained and evaluated on synthetic data. Therefore its generalization to real data is limited. Moreover, PoinTr has some generalization issues even for ShapeNet models, e.g., Figure 4 in [7].
W3: The quantitative results in Table 1 are insufficient to prove the effectiveness of the proposed method in terms of out-of-domain shape completion task
According to Table 1, we improve over the baseline methods by 50% on real-world point clouds. Note that additional qualitative and quantitative examples are presented in the supplementary materials.
W4: Missing comparisons on Shapenet/MVP/PCN
Our work addresses the problem of completing point clouds captured "in the wild", rather than in closed-world synthetic data. We expect methods trained to complete PCs of specific predefined classes on synthetic data to perform well in that setting, but as our paper demonstrates, they struggle when applied to open-world settings. We focus the experiments on real partial point cloud data captured by real sensors rather than on synthetic data sampled artificially from CAD models. Unlike the previous methods, our method was not trained on synthetic datasets, so it would be unfair to base the comparisons on these datasets. Since neither our method nor the baselines were trained on real cases, we find it fair to base the comparison on real-world data.
W5: Quantitative comparisons on KITTI are missing; only 5 samples are shown
We conducted a user study on 15 sampled KITTI objects. The results are presented in the supplemental material, showing that our method outperformed the baseline methods in 73.4% of the cases. These results demonstrate that our method produces better completions on KITTI in terms of quality. Since GT shapes for KITTI are not available, it is not possible to compute automated metrics.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply.
The reviewer is satisfied with some of the replies but still has unresolved concerns. I would like to raise my rating but still lean toward rejection.
The number of KITTI samples for qualitative comparison is not the key problem; the missing quantitative comparison is. Many previous methods, like PCN or Pcl2Pcl, have introduced metrics to evaluate results without GT on datasets like KITTI. On the other hand, the response to W1 still does not fully address the problem of requiring so many inputs, some of which are not always accessible in practice and will limit the applicability of the proposed method. For example, the internal parameters may not be easily found for some real scan datasets like ScanNet v2 or S3DIS, not to mention in industrial scenarios.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response and for the insightful discussion.
The key remaining concern is that quantitative results for KITTI are missing.
Thank you for your insight and for providing concrete suggestions for evaluation metrics that allow us to evaluate KITTI even without GT data. Following the reviewer's suggestions, we follow PCN and calculate the Minimal Matching Distance (MMD). MMD is the Chamfer Distance (CD) between the output surface and the surface from ShapeNet that is closest to the input point cloud in terms of CD. We calculated this metric on the surfaces that were evaluated in our user study from two categories: car and motorcycle. These are the only categories that have associated ShapeNet subsets, which is a necessary condition for calculating the MMD metric. The mean MMD over the motorcycle and car shapes is presented in the table below, showing that our approach improves over the baselines:
| | MMD ↓ |
|-------------|-------:|
| ShapeFormer | 0.035 |
| PoinTr | 0.039 |
| Ours | 0.030 |
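A rough sketch of how this MMD computation works (brute-force Chamfer distance on small point sets; the function names and toy inputs below are ours, not the paper's implementation):

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise squared dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def mmd(output_pts, input_pts, shapenet_shapes):
    """Minimal Matching Distance: pick the ShapeNet shape closest to the
    *input* partial cloud (in CD), then report CD between the method's
    output and that reference shape."""
    nearest = min(shapenet_shapes, key=lambda s: chamfer(input_pts, s))
    return chamfer(output_pts, nearest)
```

The key design point is that the reference shape is matched against the input partial scan, so the metric does not assume any GT completion exists for KITTI.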
We further computed the CLIP R-Precision metric [10] on all of our evaluated KITTI categories: “car”, “truck”, “motorcycle”, and “excavator”. This metric checks the accuracy of classifying a rendered image by choosing the class that maximizes the cosine similarity score between the image and the text “a rendering of a <class name>”, among all classes. We evaluated CLIP R-Precision on the output meshes of the different methods, each rendered from 360 degrees with azimuth gaps of 2 degrees (180 images for each surface). We report the mean accuracy below. Here again, our approach is substantially better:
| | Accuracy (%) ↑ |
|-------------|---------------:|
| ShapeFormer | 50.0 |
| PoinTr | 40.6 |
| Ours | 75.7 |
These metrics show that our method is better at reconstructing surfaces from partial real LiDAR scans compared with previous methods. We will include these experiments in the revised paper.
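The classification step of this metric can be sketched as follows, assuming unit-normalized CLIP embeddings of the renders and of the class prompts have already been computed (the helper name and the toy 2-D embeddings in the test are ours):

```python
import numpy as np

def clip_r_precision(render_embs, class_text_embs, true_class):
    """render_embs: (N, D) unit-normalized image embeddings of the renders.
    class_text_embs: dict mapping class name -> (D,) unit-normalized
    embedding of the prompt "a rendering of a <class name>".
    Returns the fraction of renders whose most-similar prompt is the
    true class."""
    classes = list(class_text_embs)
    text_mat = np.stack([class_text_embs[c] for c in classes])  # (C, D)
    sims = render_embs @ text_mat.T  # cosine similarity for unit vectors
    preds = sims.argmax(axis=1)
    return float(np.mean([classes[p] == true_class for p in preds]))
```

With 180 renders per surface (azimuth steps of 2 degrees), the reported accuracy is this fraction averaged over all evaluated surfaces.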
“The internal parameters may not be easily found on some real scan datasets”
We agree that requiring the internal parameters is a limitation of our method. We will discuss this limitation in the paper. With that said, it is important to note that any method for processing point clouds needs the camera's internal parameters for extracting a point cloud from a depth image. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments. Below we answer separately for each reviewer.
Pdf: /pdf/7fb0b5ec50fc175a3a92302ef25328254636b7cb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work proposes to use text-to-image diffusion model for OOD point-cloud completion. Similar to DreamFusion3D that trains NeRF accompanied by an SDS loss for 3D generation, this work applies the idea to point-cloud completion. Experiments show that the performance is good on both Redwood dataset and KITTI dataset.
Strengths: I think the proposed idea is simple yet novel. The results also show the effectiveness of the proposed method. The paper is also easy to follow.
Weaknesses: I notice in both Figure 3 and Figure 5 that, although the patterns/shapes of the completed point clouds are correct, they do not appear smooth, which even makes them worse than ShapeFormer in Figure 3. This makes the performance of the proposed idea less attractive, even though the paper targets OOD point-cloud completion.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: If I understand correctly, the method requires overfitting per object via NeRF; moreover, since it requires running inference on the pre-trained SD model at each iteration, the training process seems quite slow. Can the authors provide the training details?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It seems that this work only handles single objects, since it heavily relies on the capability of the pre-trained Stable Diffusion model. I wonder whether it can achieve scene-level completion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the comments and the feedback from the reviewer.
W1: Generated completions are not smooth in Figure 3 and Figure 5
Overall our method produces better outputs qualitatively (see more examples in the SM, Figures 4 and 5), and quantitatively, where our method’s Chamfer distance is lower by 50% overall, and by 30% for ShapeNet classes (see Table 1), compared to ShapeFormer. Regarding Figure 5, in the user study (SM, Section 3), for 73.4% of the cases the participants preferred our outputs over ShapeFormer’s outputs, which demonstrates that our method produces better completions in terms of quality.
Q1: Provide details on running time:
See supplementary, Section 5, “Running time”. Our test-time optimization method is slow compared to the feedforward baseline methods, but works much better on real point clouds, and does not require any dataset of 3D shapes for training, as the baseline methods do. We believe that it could be accelerated, e.g., by combining recent acceleration techniques such as Instant-NGP, but this is left as future work.
L1: The method is limited to single-object:
The reviewer is correct. There are natural ways to extend this approach to scenes with multiple objects. For instance, starting by segmenting a scene and splitting it into a disjoint set of partial point clouds. This tends to be easier if RGB appearance data is available. Then, our method can reconstruct each object separately, and the results can be merged back together to create the scene. Interestingly, our loss can be extended to take segmentation confidence into account. This is left for future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. The authors address my concerns. However, I still think the test-time overfitting is too slow. I think the overall work is interesting and I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response and for the insightful discussion. The slow running time is indeed a limitation, and we will extend the discussion about this point in the revised paper. Furthermore, we will add Table 2 from the Rebuttal PDF to the revised version that demonstrates the effect of running our method for shorter times and specifically shows that after 5% (100 epochs) of the total running time, our method already outperforms the baseline in terms of average Chamfer distance.
We do note that our method can solve in-the-wild cases that cannot be solved by existing methods without gathering additional data. Therefore, we believe that acceleration can remain a future challenge. | null | null | null | null | null | null |
On the choice of Perception Loss Function for Learned Video Compression | Accept (poster) | Summary: This paper studies the choice of perception loss for learned video compression and summarizes some valuable conclusions. The first one is the pros and cons of PLF-JD and PLF-FMD. The second one is the universality of MMSE reconstructions.
Strengths: This paper conducts experiments and gives some theoretical analysis of perception loss functions for learned video compression. This is valuable when designing and training a video compression pipeline.
Weaknesses: 1. The datasets used in the paper are too simple. It is questionable whether the conclusions still hold for datasets that are closer to the real world. It is suggested to provide more experiments on the datasets widely used in learned video compression, such as the UVG, MCL-JCV, and HEVC Class B datasets. Most papers conduct their experiments on these datasets, so we could see more performance comparisons and verify that the proposed method is not a toy example.
2. Considering this paper focuses on perception loss, why not report some perceptual quality metrics? For example, the experiments could also report widely used perceptual metrics such as LPIPS.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: In the experiments, I do not know how the proposed method precisely controls the bit rate to make the rates of the different methods identical.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See the weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 1Qbf for considering our work “valuable when designing and training a video compression pipeline”. Please find your concerns addressed below.
1) The datasets used in the paper are too simple. It is questionable that the conclusions still work for the datasets that are closer to the real world. It is suggested to provide more experiments on the widely used datasets in learned video compression methods, such as UVG, MCL-JCV, and HEVC Class B datasets. Most papers conduct their experiments on these datasets. So we can see more performance comparisons and verify the proposed method is not a toy example.
Reply: Please see the global rebuttal, the part on “scope of experimental results”.
2) Considering this paper focuses on perception loss, why not try some metrics about perceptual quality? For example, the experiments can also provide widely used perceptual metrics such as LPIPS.
Reply: Our focus is on the RDP tradeoff and validating these results using DL experiments. We chose the perception loss functions as they are natural counterparts of the theoretical metrics used. We also provide quality assessment metrics for your consideration. In the KTH experiment, we computed LPIPS, which is a well-known full-reference perceptual metric for images. We compare LPIPS for each reconstruction (MMSE, 0-PLF-FMD, and 0-PLF-JD) at each timeframe.
LPIPS (lower is better) on the KTH dataset:
| Frame | MMSE | 0-PLF-FMD | 0-PLF-JD |
|-------|-----:|----------:|---------:|
| 1st | 0.1036 | 0.0584 | 0.0584 |
| 2nd | 0.0521 | 0.0313 | 0.0594 |
| 3rd | 0.0413 | 0.0232 | 0.0613 |
This result aligns with our results on distortion loss, as the MMSE and 0-PLF-FMD reconstructions tend to correct themselves (so the ground truth and reconstruction look more similar over time). On the other hand, due to error permanence, the 0-PLF-JD reconstructions drift away from their source sequence, causing the score to go up. Finally, we would like to thank you for this suggestion; we will add this set of experiments to the new version of our work.
3) In the experiments, I do not know how the proposed method precisely controls the bit rate to make the rates of different methods to be identical.
Reply: We control the bit rate by controlling the dimension of the latent variables D (the encoder’s output) and the number of quantization levels L. The bit rate can then be calculated as $D \times \log_2(L)$. While this approach is slightly sub-optimal, it makes controlling the bitrate much easier compared to the well-known approach of Ballé et al. (see ref [13] of our paper). It has also been adopted in prior works on learned image compression, such as refs [11], [16], and [12] of our paper.
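For concreteness, this rate computation is simply (the example dimensions below are ours, not values from the paper):

```python
import math

def bitrate_bits(latent_dim, levels):
    """Fixed rate in bits for `latent_dim` latent symbols, each quantized
    to one of `levels` values: D * log2(L)."""
    return latent_dim * math.log2(levels)

# e.g. a 16-dimensional latent with 4 quantization levels costs 32 bits
```

Varying D and L thus gives a discrete grid of achievable rates, which makes matching the rates of different trained models straightforward.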
We hope this answers your concern. We also ask you to kindly reconsider your evaluation. | Summary: The paper consider theoretical considerations of generative video compression.
While rate-distortion-perception theory (Blau & Michaeli, 2019) explains the trade-off between realism and distortion for generative compression, the causal processing typically introduced for video models adds constraints that need to be considered.
The authors consider two settings: FMD (frame marginal distributions) and JD (joint distribution). In the FMD case, for full realism an observer cannot tell apart reconstructed frames and original frames when looking at one frame at a time, but there can be synthetic detail that is not temporally consistent. In the JD case, for full realism the joint distribution of reconstructed videos matches the originals.
The main results of the paper are:
Theorem 1: in the FMD setting, we can achieve full realism (“perfect perception”) by at most doubling the distortion of an MSE optimized system. This generalizes a result from (Blau & Michaeli, 2019 [16]) for the single frame scenario.
Theorem 2: In the JD setting, the authors obtain a similar result but for a more constrained setting. They need the source at time j to be nearly independent of the encoder outputs up to and including time j. Interestingly, they demonstrate an example in the appendix where the factor-of-two bound is not achievable.
The example can be summarized as follows: if you have some loss of information in reconstruction 1, you need reconstruction 2 to be consistent with it even if you have perfectly encoded the original frame. Hence you can’t achieve a zero reconstruction error in frame 2 despite having losslessly encoded it, violating the factor of two bound.
Theorems 3-6: Here the analysis is done for the more general rate-distortion-perception setting, where the perceptual quality/realism is sacrificed to improve distortion. The authors consider both the setting of first-order Markov sources and do further analysis in the gaussian special case.
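The factor-of-two bound in Theorem 1 has a short, standard single-frame argument (Blau & Michaeli; the notation here is ours): let $\hat{X} = \mathbb{E}[X \mid Y]$ be the MMSE reconstruction from the encoded representation $Y$, and let $\tilde{X} \sim P_{X \mid Y}$ be a posterior sample, which has the same marginal distribution as $X$ and hence perfect realism. Since $X$ and $\tilde{X}$ are conditionally i.i.d. given $Y$ with conditional mean $\hat{X}$,

$$\mathbb{E}\|X - \tilde{X}\|^2 = \mathbb{E}\|X - \hat{X}\|^2 + \mathbb{E}\|\hat{X} - \tilde{X}\|^2 = 2\,\mathbb{E}\|X - \hat{X}\|^2,$$

i.e., perfect (marginal) perceptual quality is achievable at no more than twice the MMSE distortion.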
Strengths: - Relatively readable paper
- Valuable insights into the generative video compression field
- The theoretical results are validated in experiments on small datasets (Moving MNIST and KTH).
Weaknesses: - Intuitively, why would we only consider frame-level marginal distributions? Just as pixel-level marginal distributions don’t make sense, this should lead to massive temporal artefacts/flickering in the low-rate regime. While e.g. [7] used only a frame-level discriminator, they also use warping for temporal consistency.
- I found the proof of Theorem 1 to be quite similar to the proof of Thm. 2 in Appendix B in (Blau & Michaeli, 2019). It seems to me that proof also works as-is in the setting where you condition on previously encoded frames, as there are no assumptions on how the MSE optimized reconstruction is obtained (so whether it is using information from previous frames or not doesn’t seem to matter in the derivation). I would appreciate if the author could clarify differences here.
- Experiments are limited to toy data.
Minor (not affecting rating): *The supplement is very long* (30+ pages) and the main paper contains 24 references to it. It seems to me the authors have a lot more to say than fits into 9 pages. I'm not sure what to do with this but it feels slightly weird. I did not read everything in the supplement.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: -
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Toy data / causal setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer tXFk for considering our work a “relatively readable paper” and “valuable insights into the generative video compression field”. Please find your concerns addressed below.
1) intuition on the use of PLF-FMD in the low-rate regime
Reply: Please note that in all our experiments the neural architecture we implement is similar to reference [7]. Our architecture also includes warping to match the locations of the current source frame using the decoded flow fields for temporal consistency. We demonstrate that such an architecture in conjunction with a frame-level discriminator (PLF-FMD metric in our paper) appears to provide the most desirable reconstruction when the I-frame is compressed at low bit-rates. This is because the proposed loss function is flexible enough to introduce meaningful corrections in the reconstruction in later frames as more information is received. While this might introduce some flickering effect, the effects are not random since the (high-fidelity) reconstructions tend to sync up with the source, as shown in Figures 2 and 7. In contrast, a discriminator that preserves joint distribution across frames (PLF-JD metric in our paper) is not only considerably more challenging to implement (due to the need for a different conditional decoder at each step/timeframe), but also suffers from an “error permanence” phenomenon — errors made in earlier frames do not get corrected in later frames, even when the decoder has sufficient information for doing so. We suspect that similar considerations between the joint and framewise metrics were made in prior works such as references [6] and [7], that also implement a frame-level discriminator. In [7], the authors mention (Section 3.5, second paragraph) that their attempt to include multiple frames in their GAN (a version of our JD) “did not significantly alter reconstructions”. Our work is the first attempt to theoretically study the impact of PLF-FMD and PLF-JD metrics using the framework of rate-distortion-perception (RDP) tradeoff. Our results in Theorems 1, 2, 5, as well as Table 2 (in the Appendix) provide insightful comparisons between the two metrics, which we further validate using experimental results. 
We note that our work is theoretically grounded in the study of RDP tradeoffs, and, as in the prior works of Blau and Michaeli [11] and Zhang et al. [16], we have presented our experimental studies on two simpler datasets. We expect that larger-scale experimental studies incorporating more complex datasets will lead to similar conclusions. For example, we expect errors made in previous frames to continue to propagate when the PLF-JD metric is used, regardless of the dataset, and PLF-FMD to be more flexible in correcting such errors. We hope you agree on the importance of the PLF-FMD metric and better appreciate its role in the context of prior work. Finally, we note that there are other rate regimes where the choice of PLF-JD is preferred over PLF-FMD (see Table 2 of Appendix F and the experimental results in Appendix J.5).
2) On the comparison of Theorem 1 with Thm. 2 in Appendix B in (Blau & Michaeli, 2019)
Reply: While direct extension of the achievable scheme in Thm 2 of (Blau & Michaeli, 2019) [11] by conditioning on previously encoded frames suffices to achieve the “factor of 2” bound in our Theorem 1, we note that (in contrast to [11]) the main result in Theorem 1 involves an exact characterization (see Eq. 5) of the distortion region. The “factor of 2 bound” is derived (see Eq. 6) as a consequence of this characterization. To establish our main result, in addition to the “achievability part”, we had to also establish a converse which is not present in [11]. Nevertheless we have not claimed significant novelty in this part of the work. As noted in the paragraph immediately below Theorem 1, our result in Eq. (5) is a generalization of an analogous result for single frame setting in Zhang et al (see reference [16] in our paper). The intuitive reason why Theorem 1 follows as a natural extension of previous works (on single frame) is because it considers a fixed encoder setting without explicitly considering the rate constraints. Nevertheless this setting provides insightful comparisons between the PLF-FMD and PLF-JD metrics as discussed in Section 3. We note that when the entire rate-distortion-perception tradeoff is considered our theoretical analysis is considerably more challenging than the prior works on single-frame setting in references [11] and [16]. Please look at the global rebuttal, the part on “Novelty of Theoretical Analysis” for a brief review of our theoretical challenges in the work.
3) Experiments are limited to toy data.
Reply: Please see the global rebuttal, the part on “scope of experimental results”.
4) Causal setting
Reply: For video compression, there are two frameworks being considered in the current literature: causal and non-causal frameworks. For the non-causal framework, people often compress the whole video (or a block of multiple subsequent frames), as in the following reference
Habibian, Amirhossein, et al. "Video compression with rate-distortion autoencoders." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.
We would like to emphasize that results from previous RDP tradeoffs for images should naturally hold for this setting (with the PLF-JD metric), specifically the factor-of-two bound and the universality property, since the whole video can be treated as a single 3D image in this sense. This setting, however, is not efficient in real-time applications due to the delay incurred by block compression/decompression. Most recent neural video compression methods, on the other hand, consider the causal setting.
We hope to have addressed all your concerns satisfactorily and kindly ask you to consider increasing the score.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I will stick to my rating, and hope future work will apply this to larger datasets.
---
Reply to Comment 1.1.1:
Title: Error Permanence on UVG Dataset
Comment: We would like to inform you that we have obtained the results of the experiments on the UVG dataset. According to them, the error permanence phenomenon shows up in the JD reconstructions: flaws in the color tone of a previous low-rate reconstruction propagate into future ones. FMD reconstructions, however, do not suffer from such a problem. The distortions over 3900 UVG samples are given as follows:
| | MMSE | PLF-FMD | PLF-JD |
| :--- | :---: | :---: | :---: |
| Distortion (MSE) | 0.0026 | 0.0032 | 0.0168 |
This confirms the error permanence phenomenon for PLF-JD.
Currently, we are not able to send you a visualization of our results since, according to the conference timeline, the deadline for uploading a pdf file has passed, but we will include these results in the new version of our work | Summary: This paper examines the choice of perceptual loss for causal, low-latency video compression models with distortion-perception optimization. Using information-theoretic analysis and deep-learning-based experiments, the authors demonstrate that the choice of perceptual loss can have a significant effect on the decoded reconstruction, especially at low bit rates.
Strengths: This paper demonstrates the proposed assertions through theoretical derivations and detailed experiments, which are convincing enough. In addition, these conclusions provide a good guideline basis for subjective optimization of learned video compression methods.
Weaknesses: Since this paper is oriented toward the discussion of learned video compression, the experimental setup may be too simple to yield generalized conclusions. For example, the sequences used for evaluation (e.g., MNIST and KTH) are too homogeneous to represent the characteristics of videos. It would be better to have comparisons on more complex sequences such as the UVG dataset. It is critical to illustrate whether these findings would be useful in practical applications.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As mentioned above, it is suggested to have some experiments on sequences with more complex motion and coded bits to examine the proposed findings in addition to those toy examples in the submission.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer QNA6 for considering our work as “a good guideline basis for subjective optimization of learned video compression”. Please find your concern addressed below
1) Since this paper is oriented toward the discussion of learned video compression, the experimental setup may be too simple to yield generalized conclusions. For example, the sequences used for evaluation (e.g., MNIST and KTH) are too homogeneous to represent the characteristics of videos. It would be better to have comparisons on more complex sequences such as the UVG dataset. It is critical to illustrate whether these findings would be useful in practical applications.
Reply: The current empirical setup for comparing neural codecs often entails training the networks on the Youtube/Kinetics/Vimeo90k dataset (which can encompass nearly a million frames) and subsequently testing on high-resolution videos like UVG and MCL-JCV. While we are diligently working towards generating results on UVG/MCL-JCV, it may not be feasible to achieve this within the constraints of the rebuttal process. The scale of these datasets and the complex nature of GAN training make it challenging to complete within a limited timeframe.
We would like to emphasize that our work's major contribution lies in being the first to explore the impact of perception loss functions on the RDP tradeoff for video compression, with a strong theoretical focus and we use our experimental results to confirm our theoretical findings. Prior works on the rate-distortion-perception tradeoff, such as Blau and Michaeli (ICML 2019) [11] and Zhang et al. (NeurIPS 2021) [16], also presented empirical results on simpler image datasets (MNIST and SVHN) to demonstrate their theoretical findings.
While our experiments use two homogeneous datasets, MovingMNIST and KTH (lacking effects such as zooming and camera motion), the error permanence phenomenon associated with joint distribution-based discriminators in the low-rate regime occurs even on these two homogeneous datasets, as illustrated in Fig. 1(b), Fig. 2(a) and Fig. 7 (in the Appendix). This observation strongly suggests that the same effect is likely to be more pronounced in complex, non-homogeneous videos.
Regarding the other results, Table 1 validates the existence of a "factor of 2" distortion bound through the use of a frame-level discriminator, consistent with Theorem 1. Additionally, Fig. 3 demonstrates the universality of MMSE-based representations. Since these outcomes are grounded in our theoretical analysis, we expect them to extend to diverse datasets.
Finally, we note that these findings also come with applications. The universality aspect of MMSE representation emerges as crucial during the training of perceptual video compression models. Given that perceptual reconstruction frequently generates novel details not present in the source frame (especially when using the FMD-JD loss), compressing motion flow vectors between the current frame and the prior perceptual reconstruction necessitates a higher bit allocation compared to utilizing the MMSE representation. Therefore, this suggests that the recommended approach is to train end-to-end compression exclusively using the rate-distortion loss. Subsequently, users have the flexibility to opt for their preferred decoder based on their perceptual preferences.
We hope this response alleviates your concerns about the need to implement our experiments on more complex datasets. We also kindly request you to consider increasing the score.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will keep my rating.
---
Reply to Comment 1.1.1:
Title: Error Permanence on UVG Dataset
Comment: We would like to inform you that we have obtained the results of the experiments on the UVG dataset. According to them, the error permanence phenomenon shows up in the JD reconstructions: flaws in the color tone of a previous low-rate reconstruction propagate into future ones. FMD reconstructions, however, do not suffer from such a problem. The distortions over 3900 UVG samples are given as follows:
| | MMSE | PLF-FMD | PLF-JD |
| :--- | :---: | :---: | :---: |
| Distortion (MSE) | 0.0026 | 0.0032 | 0.0168 |
This confirms the error permanence phenomenon for PLF-JD.
Currently, we are not able to send you a visualization of our results since, according to the conference timeline, the deadline for uploading a pdf file has passed, but we will include these results in the new version of our work | Summary: This paper systematically studies the rate-distortion-perception tradeoff in neural video compression. Since videos are made of consecutive frames, this paper considers two different perceptual metrics: the joint distribution of all video frames (PLF-JD) and a per-frame perceptual loss (framewise marginal distribution, PLF-FMD). Some conclusions are proposed regarding the choice of PLF-JD or PLF-FMD.
In addition, motivated by the universal encoded representations in previous RDP papers from the field of image compression, the authors demonstrate that universal encoded representations carry over to video compression when the perceptual constraint is PLF-FMD. While a similar result does not hold for PLF-JD in general, it is satisfied for a special class of encoders that operate in the low-rate regime. Universal encoded representations have several advantages for video compression, such as when motion information needs to be extracted from an MSE-based reconstruction.
Strengths: The originality and quality of this paper are good. PLF-JD and PLF-FMD are two different but both reasonable perceptual metrics used for optimizing video compression models. This is the first paper that studies the RDP issue in video compression, and the results and conclusions are insightful. Some important conclusions regarding RDP include:
(1) There is a significant penalty in distortion when using PLF-JD in the low-rate regime.
(2) While PLF-JD preserves better temporal consistency across video frames, it suffers from the permanence of error phenomenon in which the mistakes in reconstructions propagate to future frames.
The universal encoded representations for video compression are studied as well and are demonstrated to always hold when using PLF-FMD (similar to compressing images sequentially).
Besides, the effectiveness of RDP in video compression models is well verified by theoretical analysis on Gaussian-Markov sources.
Weaknesses: (1) (Perhaps not as a weakness) I am wondering whether these conclusions of RDP in video compression can be extended to the distortion-perception tradeoff in other video generation / restoration tasks. The joint-frame loss PLF-JD and the per-frame loss PLF-FMD should lead to similar conclusions for video generation.
(2) The state-of-the-art neural video compression framework contains a motion encoder/decoder and a residual encoder/decoder, which are effective for high-resolution videos. It seems this paper studies the other neural video compression framework that does not explicitly transmit motion information. How to apply the proposed theory to such motion-residual-separated video compression framework?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See the abovementioned “weaknesses”.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are discussed in the final section of Appendix. This paper is in the field of video compression which does not involve strong negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer pcBB for considering our work “the first paper that studies the RDP issue in video compression” and praising its “originality and quality”. Please find your concerns addressed below.
1) (Perhaps not as a weakness) I am wondering whether these conclusions of RDP in video compression can be extended to the distortion-perception tradeoffs in other video generation/restoration tasks. The joint-frame loss PLF-JD and the per-frame loss PLF-FMD should lead to similar conclusions for video generation.
Reply: Thank you for pointing this out. Indeed, the tasks of restoration can be interpreted as a video compression problem. For the causal video restoration task, the incoming corrupted frame can be considered as a message $M_i$ that the decoder needs to decode (restore) and the encoder simply sends $M_i$ by corrupting the “clean” source image $X_i$. As such the factor-of-two bound should hold for the PLF-FMD metric (following [1], which considers the image restoration setting), but not for the PLF-JD metric, by following the same arguments presented in our paper.
In the context of video generation, recent SOTA methods (StyleGAN-V, MoCoGAN) frequently adopt implicit generation approaches, such as GANs, that bypass the distortion loss. In this context, PLF-JD appears to be a generally preferable choice, given its emphasis on temporal consistency. Within the VAE framework, methods like He et al.'s "Probabilistic Video Generation using Holistic Attribute Control" (ECCV 2018) often incorporate a distortion term. This category of models frequently produces blurry videos. Consequently, we posit that incorporating a PLF-FMD term could enhance the generation of realistic frames while sacrificing temporal consistency. Conversely, utilizing PLF-JD might introduce training challenges (as elaborated in the "Significance of Results" section in the rebuttals). We believe this is an interesting question for future research.
For non-causal video generation/restoration, where a video is treated as a 3D image, PLF-JD should be a better metric overall since previous results for RDP image compression (factor of 2 bound, universality) should hold with the PLF-JD metric. This is because the proof from previous works does not make assumptions on the dimensionality of the input. As such, the implication for the non-causal case is more straightforward (similar to results for images). In fact, this is also the message that we want to send to the community, that theoretical results for rate-distortion-perception depend heavily on the coding scheme one is using.
2) The state-of-the-art neural video compression framework contains a motion encoder/decoder and a residual encoder/decoder, which are effective for high-resolution videos. It seems this paper studies the other neural video compression framework that does not explicitly transmit motion information. How to apply the proposed theory to such motion-residual-separated video compression framework?
Reply: On the theoretical side, the current framework can be adapted to incorporate the motion-residual-separated architecture. At each step, the source includes both motion and residual contents. The message that is sent to the decoder can contain two parts where one of them compresses the motion content while the other one is the compressed version of the residual content. The operational RDP region is defined similarly to Definition 2. So, all the previous discussions hold with a simple redefinition of the random variables.
On the experimental side, many of the current state-of-the-art architectures do indeed contain a motion encoder/decoder and residual encoder/decoder. This is also replicated in our experiment section, where the scale-space-flow module is responsible for computing the optical flow field and warping (see ref [32] of our paper), and the conditional module for efficient residual compression (see ref [4] of our paper). Our proposed theory is general and applicable to this architectural design, which has been illustrated in the experiment section.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. They address my previous concerns well. I will keep my rating as accept. | Rebuttal 1:
Rebuttal: We thank the reviewers for acknowledging the significance of the work. Reviewer pcBB notes that our work is the first paper that studies the RDP issue in video compression and finds the results and conclusions to be insightful. Similarly reviewers QNA6, txFk and 1Qbf have noted the significance of our results and conclusions to learned video compression. Reviewers QNA6 and txFk have also appreciated the quality of the presentation. In our rebuttal, we have taken into account the feedback provided by the reviewers and included some additional experimental results involving LPIPS score as suggested by the reviewers (see pdf file below). We note that the conclusions are largely consistent with the original results in the paper. We provide some general comments that address the concerns raised by the reviewers.
**Scope of experimental results:** Some of the prior experimentally oriented papers in neural video compression involve extensive training (Youtube/Kinetics) and complex testing (UVG) to validate the use of new metrics or neural architectures. Our work's major contribution lies in being the first to explore the impact of perception loss functions (PLFs) on the rate-distortion-perception (RDP) tradeoff, with a strong theoretical focus. The purpose of the experiments is to provide qualitative visualization of our theoretical results using simpler datasets. For example, in Fig. 1(b), Fig. 2(a) and Fig. 7 (in the Appendix) we demonstrate the error permanence phenomenon in the low-rate regime associated with a discriminator network that considers the joint distribution across video frames. Since this observation is inherent to the nature of the PLF, as verified by our theoretical result (Thm 5), we do not expect this property to be specific to our choice of datasets, and it can be generalized to other datasets. Similarly, Table 1 demonstrates the "factor of 2" bound in distortion when a frame-level discriminator is used during training, consistent with Theorem 1 in the paper. Likewise, Fig. 3 demonstrates the sufficiency of MMSE representations for achieving near-optimal performance. Since all these results are grounded in our theoretical analysis, we again expect other datasets to yield similar conclusions. Note that prior works that provided a theoretical study of the RDP tradeoff, such as the works of Blau and Michaeli (ICML 2019) [11] and Zhang et al (NeurIPS 2021) [16], also demonstrated their empirical results on simpler datasets such as MNIST and SVHN. Based on these considerations, we request the reviewers who have given us lower scores simply because the experimental results did not include additional datasets to reconsider their evaluation.
We also note that while we are making every effort to produce results on additional datasets, it may not be practically feasible for us to finish these results during the rebuttal process as these datasets are extremely large and the GAN training may not complete in such a short timeframe. Finally, we added experimental results with LPIPS score (requested by reviewer 1Qbf) and they are consistent with our previous observations (see the pdf file).
**Significance of Results:** We establish that encoded representations designed to minimize the MSE distortion loss can suffice to achieve near optimal distortion-perception tradeoff for both the joint and per-frame perception loss constraints. Our work is the first one to introduce such a principle of universality in the context of learned video compression previously studied in the context of image compression [16, 25]. In fact we believe that universality of MSE representations is far more significant in the context of learned video compression. Given that perceptual reconstruction frequently generates novel details not present in the source frame, compressing motion flow vectors between the current frame and the prior perceptual reconstruction necessitates a higher bit allocation compared to utilizing the MMSE representation. Hence, a recommended approach is to train end-to-end compression exclusively to minimize MSE and use the method in the proposed work to achieve a (near-optimal) tradeoff between the distortion and perception losses as desired by the user. We also provide insightful analysis using the framework of RDP to compare two commonly used PLFs. We demonstrate (using theoretical analysis and experimental results) that the PLF using a joint distribution constraint can be overly restrictive at low compression rates and can lead to an undesired error permanence phenomenon. In contrast, PLFs based on per frame metric can have enough flexibility to perform desirable corrections in the reconstruction frames as more information is available to the decoder and may be preferred at low bit rates. However, in some other rate regimes, the choice of PLF-JD is preferred over PLF-FMD (see Table 2 of Appendix F and experimental results of Appendix J.5).
**Novelty of Theoretical Analysis:** The study of RDP region for learned video compression is considerably more challenging than the study of RDP function for a single frame in prior works. As established in Thm 3, the RDP region (for first-order Markov sources) involves a tradeoff between the compression rate assigned to each frame. Even for Gaussian sources, the RDP region does not have a simple closed form which makes the proof of optimality of Gaussian reconstructions (Thm 4) more involved. As a result a significant effort was devoted towards obtaining insights in various operating regimes. We summarized the main results in Table 2 in Appendix F and in Thm 5. Furthermore the proof of universality in Thm 6 (for Gaussian sources) is considerably more challenging as one has to consider the achievability of the entire RDP region as opposed to just points on the boundary of RDP function in [16]. Finally the results on the fixed-encoder setup are more general than prior works that required a characterization of the information RDP region, which we do not require.
Pdf: /pdf/b5f6d5fa735ac036c91ce35e7a01f16ff5fbe2ae.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding | Accept (poster) | Summary: This paper proposed a network for point cloud understanding. The basic idea is to process point clouds in 3 orthogonal 2D planes (triplanes). This can largely reduce the computation cost compared to processing point clouds in 3D space. Experiments on several tasks (segmentation and detection) are conducted to show the effectiveness of this method.
Strengths: This paper showed an insight into point cloud processing: when dealing with data of large dimensionality, it can be better to first project the data into a smaller space. The figures nicely illustrate the core idea. Even though the triplane-style network has been used in many other works, I still believe the paper did a great job in designing the backbone network. The experiments on segmentation and detection also show the potential usage of the proposed network.
Overall, I believe the paper delivered a great idea in designing 3D networks and some good results. My major concern is about the experiment part (see the weakness section).
Weaknesses: 1. As a general backbone network for point cloud understanding. Some experiments are missing. For example, object-level classification (ShapeNet, ModelNet, ScanObjectNN), object-level segmentation (ShapeNetPart, PartNet).
2. As mentioned above, the triplane-style network has been used in some prior works (e.g., [1]).
3. A minor issue, it is difficult to understand the notations of attention matrix for 3 planes (-, ~ and ^). I know because of the superscripts and subscripts, the authors had to choose (-, ~ and ^) to denote different planes. But maybe we can find something better.
[1] Efficient Geometry-aware 3D Generative Adversarial Networks
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and constructive comments. We are encouraged by your positive comments on our method (great idea in designing 3D networks, potential usage of the proposed network). In the following, we address your concerns carefully.
**Q1: Missing experiments on object-level classification and object-level segmentation.**
A: Thank you for pointing out this issue. During the submission, we did not conduct experiments on object-level tasks, because this work mainly aims to deal with large-scale point clouds, which contain more points and thus require higher computational costs. We therefore did not prioritize experiments on object-level point clouds.
Here, with your suggestions and our interest in the performance on small-scale point clouds, we conduct experiments on ModelNet and ShapeNetPart and the results are listed below. We can find that for small-scale point cloud data, our method still achieves comparable or even better performance in comparison with Point Transformer v1, Point Transformer v2, and Stratified Transformer. We will add these results in the final version.
Shape classification results on ModelNet40:
| Method | mAcc (%) | OA (%) |
|:----------|:----------:|:-------:|
| PointNet | 86.0 | 89.2 |
| PointNet++ | - | 91.9 |
| Point Transformer v1 | 90.6 | 93.7 |
| Point Transformer v2 | 91.6 | 94.2 |
| PointNeXt | 90.8 | 93.2 |
| ConDaFormer | 90.8 | 94.0 |
Part segmentation results on ShapeNet-Part:
| Method | cls. mIoU | ins. mIoU |
|:----------|:----------:|:-------:|
| PointNet | 80.4 | 83.7 |
| PointNet++ | 81.9 | 85.1 |
| Point Transformer v1 | 83.7 | 86.6 |
| Stratified Transformer | 85.1 | 86.6 |
| PointNeXt | 85.2 | 87.0 |
| ConDaFormer | 84.9 | 86.8 |
---
**Q2: The triplane-style network has been used in some prior works.**
A: Thank you for providing an interesting work using the triplane-style strategy. EG3D [1] represents the intermediate 3D volume in 3D generation as three planes to reduce memory. For each 3D point, its features are obtained by projecting it onto the three planes and then summing the three queried feature vectors. Although both EG3D and our method use the triplane to reduce computational costs, EG3D mainly focuses on a more efficient representation (2D planes instead of dense 3D volumes) to capture greater detail using higher-resolution features, while ours focuses on reducing the number of points involved in self-attention to enlarge the attention range. Therefore, we think our method is different from EG3D. Following your suggestion, we will add this discussion to the final version.
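For illustration, here is a minimal toy sketch of the triplane query just described (our own hypothetical example, not EG3D's or the paper's implementation; nearest-neighbour sampling is used for brevity, whereas EG3D uses bilinear interpolation):

```python
import numpy as np

def query_triplane(point, plane_xy, plane_xz, plane_yz, res):
    """Feature lookup for a 3D point in [0, 1]^3: project the point onto
    the three axis-aligned planes and sum the queried feature vectors."""
    idx = np.clip((np.asarray(point) * res).astype(int), 0, res - 1)
    x, y, z = idx
    return plane_xy[x, y] + plane_xz[x, z] + plane_yz[y, z]

res, c = 8, 4                                   # toy plane resolution / channels
rng = np.random.default_rng(0)
planes = [rng.standard_normal((res, res, c)) for _ in range(3)]
feat = query_triplane((0.3, 0.7, 0.5), *planes, res=res)   # shape (c,)
```

The three 2D planes store O(res^2) features instead of an O(res^3) dense volume, which is the memory saving EG3D exploits.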
[1] Chan et al. Efficient geometry-aware 3D generative adversarial networks. CVPR 2022.
---
**Q3: The notations of attention matrix are difficult to understand.**
A: Thank you for your suggestion. We are considering placing the plane indicator in the subscript, e.g., changing $\bar{Q}^h_t$ to $Q^h_{xy,t}$ and $\bar{Attn}$ to $Attn_{xy}$. We would appreciate it if you could give further feedback.
---
We hope our response adequately addresses your concerns. If you still have any questions, we are looking forward to hearing them.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification and the new results. I believe this paper proposed an interesting method and showed some good results.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you again for your time and constructive suggestions. We are encouraged by your recognition of our method and our responses. We will improve our paper's quality based on your guidance and comments. | Summary: This paper proposes a new window partitioning method, which can save a lot of computation cost by sacrificing a small amount of precision. At the same time, it proposes a kind of depth-wise sparse convolution, applied before and after self-attention, which can better capture local structure. The experimental results demonstrate the effectiveness of the method.
Strengths: 1. This paper innovatively puts forward a new window partitioning method, which divides a 3D cube into three 2D planes and then divides the windows. This method can save a lot of computation cost. The paper also presents a new regularization method to optimize networks by predicting relative position differences.
2. Depth-wise sparse convolution is used to capture local structures, and the experimental results fully demonstrate the effectiveness of this design.
3. The content is sufficient and the experimental results are abundant
Weaknesses: 1. The interaction between different planes is only an additive operation, which may lead to the loss of 3D structure information.
2. Relevant work on window partitioning methods is largely not discussed or compared
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I think a better interaction between the planes might improve task performance even more
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been discussed in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and constructive comments. We are encouraged by your positive comments on our method (innovative, effective) and experiments (abundant). In the following, we address your concerns carefully.
**Q1: The interaction between different planes.**
A: Thank you for pointing out this issue. However, we would like to clarify that the interaction between different planes is not a summation but a concatenation ($\bigoplus$) in Eq. 4 and Eq. 6, as stated in L161. We apologize for the misunderstanding.
In addition, even with concatenation, some loss of 3D structure information might occur, as you pointed out. In fact, we alleviate this issue by introducing depth-wise sparse convolution (DSConv). Specifically, before the attention, we apply DSConv to aggregate more contextual information. After the self-attention within each plane, we also propagate the updated feature of each point to its local neighbors with another DSConv operation, which enables information exchange between different planes. We appreciate your suggestion to further improve the interaction between planes, which we will explore in future work.
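To make the role of the second DSConv concrete, here is a toy sketch of our own (plain averaging stands in for both the per-plane attention and the DSConv; it is not our actual implementation): mixing restricted to a single xy-plane cannot move information along the third axis, while a local 3x3x3 aggregation afterwards can.

```python
K = 4
grid = [[[0.0] * K for _ in range(K)] for _ in range(K)]  # grid[z][x][y]
grid[0][0][0] = 1.0  # a single marked voxel in plane z = 0

def mix_within_xy_planes(g):
    # stand-in for per-plane self-attention: replace each voxel by its plane mean
    out = [[[0.0] * K for _ in range(K)] for _ in range(K)]
    for z in range(K):
        m = sum(g[z][x][y] for x in range(K) for y in range(K)) / (K * K)
        for x in range(K):
            for y in range(K):
                out[z][x][y] = m
    return out

def local_avg(g):
    # stand-in for DSConv: average over the 3x3x3 neighborhood
    out = [[[0.0] * K for _ in range(K)] for _ in range(K)]
    for z in range(K):
        for x in range(K):
            for y in range(K):
                vals = [g[zz][xx][yy]
                        for zz in range(max(0, z - 1), min(K, z + 2))
                        for xx in range(max(0, x - 1), min(K, x + 2))
                        for yy in range(max(0, y - 1), min(K, y + 2))]
                out[z][x][y] = sum(vals) / len(vals)
    return out

after_attn = mix_within_xy_planes(grid)
# plane-restricted mixing alone never reaches plane z = 1
after_conv = local_avg(after_attn)
# the local aggregation afterwards propagates the signal across planes
```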
---
**Q2: Relevant work on window partitioning methods.**
A: In the Related work of the submission, we briefly reviewed some works on window partitioning in 2D transformers, such as Swin Transformer [1], Ccnet [2], Axial-Attention [3], and CSWin Transformer [4], and several related works in 3D, including Stratified Transformer, Swin3D, and OctFormer. Here, we discuss more related works on 3D and will incorporate them into the final version.
Inspired by Swin Transformer, both Stratified Transformer [5] and Swin3D [6] partition the 3D space into non-overlapping 3D cubic windows to perform local self-attention. SST [7] first projects the 3D point cloud into Bird's Eye View (BEV) space and then splits the space into non-overlapping 2D square windows. To avoid the expensive cost of window partitioning and padding due to inconsistent token counts within each window in SST, FlatFormer [8] partitions the 3D point cloud into groups of equal size using axis sorting, leading to improved computational regularity.
[1] Liu et al. Swin transformer: Hierarchical vision transformer using shifted windows. ICCV 2021.
[2] Huang et al. Ccnet: Criss-cross attention for semantic segmentation. ICCV 2019.
[3] Wang et al. Axial-deeplab: Stand-alone axial-attention for panoptic segmentation. ECCV 2020.
[4] Dong et al. Cswin transformer: A general vision transformer backbone with cross-shaped windows. CVPR 2022.
[5] Lai et al. Stratified transformer for 3d point cloud segmentation. CVPR 2022.
[6] Yang et al. Swin3D: A pretrained transformer backbone for 3d indoor scene understanding. arXiv:2304.06906 (2023).
[7] Fan et al. Embracing single stride 3d object detector with sparse transformer. CVPR 2022.
[8] Liu et al. FlatFormer: Flattened window attention for efficient point cloud transformer. CVPR 2023.
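A minimal contrast between the two partitioning styles just discussed, on toy 2D points of our own choosing (real SST/FlatFormer operate on sparse BEV pillars with considerably more machinery):

```python
from collections import Counter

# eight toy 2D (BEV) points
pts = [(0.2, 0.3), (0.4, 0.8), (0.6, 0.1), (1.5, 0.5),
       (2.2, 2.9), (2.7, 2.1), (3.3, 3.8), (3.9, 3.9)]

# SST-style window partition over 1x1 cells: token counts per window vary,
# so padding is needed for batched attention
win_counts = Counter((int(x), int(y)) for x, y in pts)

# FlatFormer-style: sort along the axes and cut into equal-size groups,
# which keeps the computation regular
G = 4
order = sorted(pts)  # axis sorting (x, then y)
groups = [order[i:i + G] for i in range(0, len(order), G)]
```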
---
We hope our response adequately addresses your concerns. If you still have any questions, we are looking forward to hearing them.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: Thanks to the author's thoughtful response, I feel that my questions have been mostly resolved, and I will maintain my rather positive rating
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you again for your time and constructive suggestions. We are encouraged by your recognition of our method and our responses. We will improve our paper's quality based on your guidance and comments. | Summary: This paper studies point cloud understanding. They propose ConDaFormer, a novel Transformer architecture for 3D point cloud. Specifically, ConDaFormer disassembles the cubic window into three orthogonal 2D planes, leading to fewer points when modeling the attention in a similar range. Together with local sparse convolutions, ConDaFormer can capture both long-range contextual information and local priors. They evaluate their method on point cloud detection and segmentation datasets and achieve good performance.
Strengths: - Their method has substantial improvements over the existing point cloud Transformers.
- The proposed architecture is simple and effective. The method has sufficient novelty.
- They evaluate their approach on widely-used point cloud datasets and achieve satisfactory results.
- Paper writing is good and easy to follow.
Weaknesses: - The proposed model has some similarities with FlatFormer [1], which groups the patches by axis sorting. The axis sorting is similar to disassembled window attention proposed in this paper. The authors need to clarify the difference with the existing works.
- The proposed model failed to outperform the SoTA detectors on SUN-RGBD.
[1] Liu et al. FlatFormer: Flattened Window Attention for Efficient Point Cloud Transformer.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and constructive comments. We are encouraged by your positive comments on our method (novel, effective, substantial improvements over the existing point cloud Transformers) and the writing. In the following, we address your concerns carefully.
**Q1: The similarities with FlatFormer.**
A: Thank you for your suggestion for discussing the differences between ours and FlatFormer.
Inspired by Swin Transformer [1], SST [2] first projects the 3D point cloud into Bird's Eye View (BEV) space and then splits the space into non-overlapping 2D square windows to perform self-attention. FlatFormer [3] proposes flattened window attention mainly to solve the problem of inconsistent token counts within each window in SST and thereby achieve parallel processing on the GPU. FlatFormer does not partition the window into planes; instead, it first voxelizes the point cloud into sparse BEV pillars and then re-orders these pillars in BEV space. Both its motivation and design differ from ours. In comparison, we first voxelize the 3D point cloud and then process the voxels with our proposed disassembled window attention module to enlarge the attention range with minimal additional computational cost.
We will add the discussion in the final version.
[1] Liu et al. Swin transformer: Hierarchical vision transformer using shifted windows. ICCV 2021.
[2] Fan et al. Embracing single stride 3d object detector with sparse transformer. CVPR 2022.
[3] Liu et al. FlatFormer: Flattened window attention for efficient point cloud transformer. CVPR 2023.
---
**Q2: The proposed model failed to outperform the SoTA detectors on SUN-RGBD.**
A: Thank you for pointing out this issue. To validate the effectiveness of our ConDaFormer as a generalized 3D backbone, we selected a simple 3D object detection method, FCAF3D, as our baseline and did not make elaborate adjustments to our method to improve the performance. To address your concern, we further took the SoTA detector, CAGroup3D (Table 5 in the submission), as our baseline and conducted experiments on the SUN RGB-D dataset. The best and average (in brackets) performances are listed below. As a generalized 3D backbone, ConDaFormer's potential is demonstrated by its comparable performance to FCAF3D and CAGroup3D. We will add this new comparison in the final version to further show the capacity of our method.
Detection results on SUN RGB-D:
| Method | mAP\@0.25 | mAP\@0.50 |
|----------|:----------:|:-------:|
| CAGroup3D | 66.8 (66.4) | 50.2 (49.5) |
| ConDaFormer | 67.1 (66.8) | 49.9 (49.5) |
---
We hope our response adequately addresses your concerns. If you still have any questions, we are looking forward to hearing them.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. The authors have addressed my concerns, so I would keep my rating and recommend this paper for acceptance.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you again for your time and constructive suggestions. We are encouraged by your recognition of our method and our responses. We will improve our paper's quality based on your guidance and comments. | Summary: In this paper, CondaFormer, an innovative 3D transformer architecture, is presented. It ingeniously dissects the cubic window into three orthogonal 2D planes and incorporates a local structure enhancement strategy that uses depth-wise convolutions to capture local geometric information. Through rigorous experiments on point cloud semantic segmentation and 3D detection, CondaFormer's efficacy is demonstrably validated.
Strengths: 1. The rationale behind dissecting the cubic window into tri-planes is well-articulated and has a solid foundation. This approach considerably reduces computational cost by limiting query-key pairs.
2. CondaFormer showcases remarkable improvements in performance in the context of semantic segmentation, as evidenced by the results.
3. A series of comprehensive ablation studies are conducted, providing evidence for the effectiveness of the window disassembly design, local structure enhancement, and the impact of hyper-parameter choices.
4. The paper is well-structured and clearly presented, making it accessible and easy to follow.
Weaknesses: 1. The overall concept, though practical, does not break new ground in terms of novelty. The disassembly of 3D windows into 2D planes can be seen as a straightforward adaptation of the Axial Transformer[1], which similarly disassembles 2D windows into 1D axis attention.
([1] Ho J, Kalchbrenner N, Weissenborn D, et al. Axial attention in multidimensional transformers[J]. arXiv preprint arXiv:1912.12180, 2019.)
2. The performance in 3D object detection leaves room for improvement, as CondaFormer does not exhibit a substantial edge over the baseline FCAF3D.
3. It is recommended that the authors further validate the model's performance and robustness by conducting additional experiments on outdoor perception tasks such as 3D object detection or segmentation on datasets like KITTI or Waymo.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see the strengths and weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and constructive comments. We are encouraged by your positive comments on the rationale behind our method, the improvement in segmentation, ablations (comprehensive), and presentation.
In the following, we address all your concerns carefully.
**Q1: The overall concept, though practical, does not break new ground in terms of novelty.**
A: We first sincerely appreciate your recognition of the inspiration behind the disassembly of 3D windows into 2D planes from 2D window attention.
Regarding your concern about the disassembly operation, we agree with you that some similar strategies have been explored by previous works on 2D transformers. As mentioned in L52 of the submission, we also indicated that our idea is indeed inspired by CSWin. In L87-89, we also list several works using similar window partitioning strategies, including Axial-Attention.
In this paper, we explore such a strategy for transformer in 3D point cloud understanding to enlarge the range of attention without increasing the computational costs. But, we would like to explain that our work **does not just bring the design in CSWin or Axial-Attention into the 3D transformer**.
**As shown in Table 7, while reducing the computational costs, such a straightforward adaptation inevitably causes performance degradation due to less context being modeled.
We thus introduce depth-wise sparse convolution within the disassembled window attention module to enhance the local structure representation. Moreover, we also conduct detailed ablations to analyze the components involved in the block.**
In fact, applying strategies inspired by advanced techniques in 2D to the 3D community has been explored before. For example, Stratified Transformer, motivated by the Swin Transformer, extends the non-overlapping 2D window partitioning strategy to transformers on 3D point clouds. To capture long-range contexts, it introduces a stratified sampling strategy. This method has become an important backbone for point cloud understanding. We hope our study of the 3D window disassembly strategy can motivate more explorations of 3D point cloud transformers.
---
**Q2: The performance in 3D object detection leaves room for improvement.**
A: Thank you for pointing out this issue. To validate the effectiveness of our ConDaFormer as a generalized 3D backbone, we selected a simple 3D object detection method, FCAF3D, as our baseline and did not make elaborate adjustments to our method to improve the performance. To address your concern, we further took the SoTA detector, CAGroup3D (Table 5 in the submission), as our baseline and conducted experiments on the SUN RGB-D dataset. The best and average (in brackets) performances are listed below. As a generalized 3D backbone, ConDaFormer's potential is demonstrated by its comparable performance to FCAF3D and CAGroup3D. We will add this new comparison in the final version to further show the capacity of our method.
Detection results on SUN RGB-D:
| Method | mAP\@0.25 | mAP\@0.50 |
|----------|:----------:|:-------:|
| CAGroup3D | 66.8 (66.4) | 50.2 (49.5) |
| ConDaFormer | 67.1 (66.8) | 49.9 (49.5) |
---
**Q3: Additional experiments on outdoor perception tasks.**
A: Thank you for your constructive suggestions. In response to your recommendation, we conducted experiments on SemanticKITTI for 3D semantic segmentation and on nuScenes for both 3D semantic segmentation and 3D object detection. The detailed results are listed below.
Specifically, for 3D semantic segmentation, ConDaFormer achieves 72.0% mIoU on the SemanticKITTI test set and 79.9% mIoU on the nuScenes validation set. ConDaFormer outperforms most prior methods designed specifically for outdoor LiDAR data, such as Cylinder3D [3] and RPVNet [4], and performs only slightly worse than 2DPASS [5], which utilizes additional image information.
For 3D object detection, we chose TransFusion-L [7] as the baseline model and replaced its backbone with our ConDaFormer. ConDaFormer achieves 68.5% NDS and 63.0% mAP on the validation set of the nuScenes dataset.
It is worth noting that previous works on point cloud understanding backbone design not aimed at outdoor LiDAR data (such as Stratified Transformer and Point Transformer) did not conduct experiments on outdoor LiDAR data.
Semantic segmentation results on SemanticKITTI test set:
| Method | mIoU |
|:----------|:----------:|
| KPConv [1] | 58.8 |
| SPVNAS [2] | 67.0 |
| Cylinder3D [3] | 68.9 |
| RPVNet [4] | 70.3 |
| 2DPASS [5] | 72.9 |
| ConDaFormer | 72.0 |
Semantic segmentation results on nuScenes val set:
| Method | mIoU |
|:----------|:----------:|
| Cylinder3D [3] | 76.1 |
| PVKD [6] | 76.0 |
| RPVNet [4] | 77.6 |
| 2DPASS [5] | 79.4 |
| ConDaFormer | 79.9 |
Object detection results on nuScenes val set:
| Method | NDS | mAP |
|:----------|:----------:|:----------:|
| TransFusion-L [7] | 68.48 | 63.07 |
| ConDaFormer | 68.54 | 62.95 |
[1] Thomas, et al. "Kpconv: Flexible and deformable convolution for point clouds." ICCV 2019.
[2] Tang, et al. "Searching efficient 3d architectures with sparse point-voxel convolution." ECCV 2020.
[3] Zhu, et al. "Cylindrical and asymmetrical 3d convolution networks for lidar segmentation." CVPR 2021.
[4] Xu, et al. "Rpvnet: A deep and efficient range-point-voxel fusion network for lidar point cloud segmentation." ICCV 2021.
[5] Yan, et al. "2dpass: 2d priors assisted semantic segmentation on lidar point clouds." ECCV 2022.
[6] Hou, et al. "Point-to-Voxel Knowledge Distillation for LiDAR Semantic Segmentation." CVPR 2022.
[7] Bai, et al. "TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers." CVPR 2022.
---
We hope our response adequately addresses your concerns. If you still have any questions, we are looking forward to hearing them.
---
Rebuttal Comment 1.1:
Comment: The authors have conducted additional experiments that provide an effective rebuttal to my previous concerns. The new results demonstrate that CondaFormer outperforms the previous state-of-the-art CAGroup3D method on the SUN RGB-D dataset and improves the outdoor perception methods. This addresses my main criticism about the strength of the empirical results. I am willing to increase my score to be borderline accept.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you again for your time and constructive suggestions. We are genuinely delighted that our response addressed your concerns and also encouraged by your recognition of our method. We will refine our final version based on your guidance and comments. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: Recent advancements in 3D point cloud understanding have explored the use of Transformers, resulting in notable progress. However, the computational demands of applying global self-attention to large point cloud datasets, which contain over 0.1 million points, present a significant challenge. To mitigate this issue, researchers have proposed using Transformers within local regions, such as spherical or cubic windows. Nevertheless, these approaches still involve a considerable number of Query-Key pairs, leading to high computational costs. Moreover, previous methods often neglect the local 3D geometric structure by employing linear projections to learn the query, key, and value.
In this paper, a new transformer block named ConDaFormer is introduced to address these challenges while also considering the local geometry prior. ConDaFormer decomposes the cubic window into three orthogonal 2D planes, reducing the number of points involved in attention modeling within a similar range. Although this disassembling operation sacrifices some contextual information, a local structure enhancement strategy is implemented using depth-wise convolutions before and after the attention step. This strategy effectively captures local geometric information.
By leveraging these innovative designs, ConDaFormer is capable of capturing both long-range contextual information and local priors. Experimental results on various benchmarks for 3D point cloud understanding demonstrate the effectiveness of ConDaFormer.
Strengths: (1) The authors propose a novel disassembled window attention module for 3D point cloud semantic segmentation by disassembling the 3D window into three orthogonal planes for self-attention. This strategy effectively reduces computational overhead with negligible performance decrease.
(2) To enhance the modeling of local features, the authors introduce depth-wise sparse convolution within the disassembled window attention module. This combination of self-attention and convolution provides a comprehensive solution for capturing both long-range contextual information and local priors in 3D point cloud data.
(3) Experiments show that our method achieves state-of-the-art performance on widely adopted large scale 3D semantic segmentation benchmarks and comparable performance in 3D object detection task. Extensive ablation studies also verify the effectiveness of the proposed components.
Weaknesses: (1) In Table 1, why are there no results on the test set for ConDaFormer?
(2) In Table 5, you said "in comparison with FCAF3D, our method achieves comparable performance but performs more steadily.". What do you mean by more steadily? Do you have any experimental results to support that?
(3) In Table 7, why don't try the window size smaller than 0.16m?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I'm positive about this paper. I really like the idea to disassemble 3D cubic window into three orthogonal planes for self-attention. It can reduce the computational cost. However, I still have some questions about the experimental results. Please see the Weaknesses and respond to those questions. Thank you.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper still has some limitations when using a larger attention window size. If the authors enlarge the window size from 0.32m to 0.48m, the training loss drops from around 0.52 to around 0.47 while the mIoU does not increase on the S3DIS dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and constructive comments. We are encouraged by your positive comments on our method and experiments (novelty and effectiveness). In the following, we address your concerns carefully.
**Q1: Why is there no result on the test set for ConDaFormer in Table 1?**
A: In Table 1, we provided the results of ConDaFormer without the test-time augmentation (TTA) on both validation and test sets of ScanNet but reported the performance with TTA (marked by a Star) only on the validation set. We would like to apologize for that. Here we explain the reason and hope to clarify this.
Following most previous works, we first did not use the TTA technique for model evaluation and reported the performance on both the validation and test sets of ScanNet (stated in L260-263). However, as Point Transformer v2 employs TTA, for a fair comparison, we further evaluated our method with TTA on the validation set. Unfortunately, due to the submission rule of the ScanNet benchmark, we only reported the performance on the validation set (marked by a star) and submitted the results on the test set to the benchmark server after the paper submission deadline. We **got 75.5% mIoU, surpassing Point Transformer v2's 75.2% mIoU**. We would like to apologize for that again and will include the test set result in the revised paper.
---
**Q2: Comparison with FCAF3D**
A: Following previous works, we ran ConDaFormer 5 times to reduce the impact caused by random sampling and provided the best and average (in bracket) performance in Table 5 (as we stated in L287-288).
The reason we think our model performs more steadily than FCAF3D is that its best and average scores on mAP\@0.25 and mAP\@0.50 are very close (64.9 vs. 64.7 and 48.8 vs. 48.5), while FCAF3D got (64.2 vs. 63.8) and (48.9 vs. 48.2).
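The "more steadily" claim can be restated as a smaller best-minus-average gap on both metrics; a quick computation on the numbers quoted in this rebuttal (Table 5 of the submission) makes this concrete:

```python
# gaps between best and average scores over the 5 runs,
# numbers copied from the rebuttal text above
gaps = {
    "ConDaFormer": (round(64.9 - 64.7, 2), round(48.8 - 48.5, 2)),
    "FCAF3D":      (round(64.2 - 63.8, 2), round(48.9 - 48.2, 2)),
}
# ConDaFormer's spread is smaller on both mAP@0.25 and mAP@0.50
steadier = all(o < b for o, b in zip(gaps["ConDaFormer"], gaps["FCAF3D"]))
```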
Thank you for pointing out this issue. We will make this explanation clearer in the revised paper.
---
**Q3: In Table 7, why don't try the window size smaller than 0.16m?**
A: If the voxel size is set to 0.04m, then with a window size of less than 0.16m we think the window attention area is too small, resulting in a limited receptive field for the network. We experimented with a window size of 0.08m on the cubic window and got 67.7% mIoU, significantly worse than the 69.9% mIoU obtained with a window size of 0.16m. Thank you for pointing out this issue. We will include this information in the revised version to provide a comprehensive understanding of the window size's impact on performance.
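The receptive-field arithmetic behind this answer is simple to spell out (our back-of-envelope count of dense voxels in a cubic window; real windows over sparse point clouds hold fewer occupied voxels):

```python
voxel = 0.04  # voxel size in meters, as in the rebuttal

def voxels_in_window(w):
    side = round(w / voxel)  # voxels per window edge
    return side ** 3

small = voxels_in_window(0.08)    # 2 voxels per side -> only 8 voxels to attend over
default = voxels_in_window(0.16)  # 4 per side -> 64 voxels
```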
---
We hope our response adequately addresses your concerns. If you still have any questions, we are looking forward to hearing them.
---
Rebuttal Comment 1.1:
Title: Keep my current rating
Comment: Thanks for the author's rebuttal. It resolved all of my concerns and I'll keep my rating for weak accept.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you again for your time and constructive suggestions. We are encouraged by your recognition of our method and our responses. We will improve our paper's quality based on your guidance and comments. | null | null | null | null | null | null |
On Learning Necessary and Sufficient Causal Graphs | Accept (spotlight) | Summary: The paper introduces a novel method to identify causally relevant variables with respect to a specific target node. Leveraging the concept of probabilities of causation, the authors propose an approach that efficiently and systematically identifies a subgraph containing the relevant ancestors of the target node. This method has been evaluated using both artificial and real-world datasets.
Strengths: * The paper is well written and motivated. In particular, the problem setting is nicely introduced.
* Comprehensive mathematical definitions and explanations.
* The method is relevant for different causal inference problems.
See the Question section for further remarks.
Weaknesses: * The paper has a slightly confusing mixture of the potential outcome (PO) framework (for instance, using A1 and A2) and the graphical causal model framework. The notation from the PO framework doesn't seem essential here, especially considering that the majority of the work is based on graphical causal models.
* It seems there is an implicit assumption that the target variable of interest is a leaf node in the graph (i.e., it has no descendants). While this aligns with the typical PO causal effect estimation setup, the proposed method could also be beneficial for other causal inference questions beyond effect estimation.
* Unclear if the proposed method is scale invariant, which would be an important property to avoid arbitrary changes in the result due to (often) unknown rescalings. This concern mostly stems from the references to NOTEARS, which is not scale invariant (see “Unsuitability of NOTEARS for Causal Graph Discovery” by Kaiser et al.). This could be discussed more clearly.
* Although the empirical results are convincing, the number of baseline methods is quite limited. Here, more and newer causal discovery algorithms could be included in the evaluation.
See the Question section for additional points and remarks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: A few general remarks and questions:
* The abstract is excellent, being very concise, nicely motivating the problem, and describing the goal of the paper.
* The title could be more precise in mentioning that it pertains to a specific target variable, especially since causal discovery algorithms typically don’t have that (simplifying) restriction.
* The beer and diapers example is engaging, but it implies that we either do not observe the hidden confounder or would incorrectly learn a model that includes "diapers" to predict "beer". Is this meant as an example of including unnecessarily many variables or as an example where causal discovery fails due to hidden confounders?
* Related to the previous point, it appears that the authors have a typical PO setting of "treatment, covariates, and outcome" in mind. I might have missed this detail, but for instance, it seems (implicitly) assumed that no variable is a causal child of the target. The general setting could be introduced more clearly.
* The scoring method appears to target effect estimation tasks. If that is indeed the causal inference question you're aiming for, this should be discussed more explicitly. Generally, the proposed methods might also be interesting for other causal questions, such as contribution or root cause analysis.
* The works “Quantifying causal influences” and “Quantifying intrinsic causal contributions via structure-preserving interventions” by Janzing et al. might provide an interesting alternative for more general (and non-linear) measures for quantifying causal influences.
* The limitation to a discrete target variable is relatively stringent. Perhaps the previously mentioned work could inspire ideas on how to generalize this further in follow-up research.
* A minor point: The edges D_X are not formally introduced in detail.
* In Definition 3.1, does this only include direct parents and grandparents but not all other (potentially further) ancestors?
* In the discussion of Example 4.7, it's somewhat confusing why one would include X_D in the first place if the graph is known. And if it is unknown, why would one even "blindly" include all variables in the model, considering that there could also be variables that are causal children of the target? Again, this seems to implicitly assume that the target node of interest is a leaf node.
* Section 5 is insightful, effectively blending the mathematical details with the overall procedure.
* It's unclear if your method only works for linear relationships/linear structural causal models (SCMs). Could you provide clarification?
* Overall, the experimental results are compelling, but the selection of baseline methods could be broader given the significant advances in recent causal discovery approaches.
--Update after rebuttal--
I have read the rebuttal and further discussed with the authors. Since there are several smaller points that need to be revised in the final version, I stick to my initial (positive) score.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There are no concerns regarding societal impact. For technical limitations, refer to the points raised in the Questions and Weakness sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions! We are honored by your recognition of our well-motivated and nicely introduced setting, comprehensive mathematical definitions/explanations, and the utility of our method. We have diligently addressed all your questions and comments. Below, we summarize your questions and comments in quotes, followed by our point-by-point responses. Please refer to **the one-page PDF in the general response (GR)** for all additional simulation/real-data results.
1. > Mixture of the potential outcome (PO) and the graphical causal model
**Re:** Thanks for this keen comment. We use the notation of the PO framework intentionally to interconnect POCs and causal effects. We first employ the PO framework to construct the POCs in Section 4.1 for a generalized multi-variable setting within a causal graph. We then introduce causal effects in Section 4.2, grounding these concepts in the same PO framework, and establish the theoretical relationship between the two sets of concepts. In Section 5, we integrate the PO and graphical causal model frameworks by employing causal effects as a regulator to guide the causal discovery process of finding NSCGs.
2. > Assume target variable of interest is a leaf node?
**Re:** We greatly value this incisive question and clarify that our framework **does not necessitate that the target variable be a leaf node**. In Definitions 3.1-3.2, the NSCG for a target variable contains only its parents and ancestors, not its children (if any). Our algorithm can flexibly include a causal identification constraint (lines 269-271) when the target variable is known to be a leaf node, or drop this constraint when such information is unavailable. To support our arguments, we have run an additional real-data analysis using data from Sachs et al. (2005), where we designated the protein Akt as the target (see **Figure 2 in GR**) but **removed the causal identification constraint**. As shown in **Table 4 in GR**, our method achieves the best performance in finding the NSCG for Akt.
3. > Is scale invariant?
**Re:** Thanks for this excellent question. NSCSL is scale-invariant when we appropriately choose the causal discovery base learner and model the treatment effects/POCs. Though NOTEARS lacks scale invariance, our method's flexibility allows for the integration of scale-invariant methods such as PC or LiNGAM. Additionally, under the LSEM, rescaling will not affect the relative rank of the features based on causal effects. In the nonlinear case, we propose to use POCs, which are scale-invariant by definition.
4. > More and newer causal discovery algorithms
**Re:** Thanks for this excellent suggestion. We've added four new baselines: DAG-GNN, GSGES, FCI, and CAM. The new comparisons encompass Scenario 4 ($p=20$, $n=100,1000$) and the new Scenario 5 ($p=50$, $n=1000,3000$) under varied settings. As displayed in **Tables 1-3 in GR**, our method outperforms all baseline methods.
5. > Title precision
**Re:** We greatly appreciate this suggestion and propose a refined title: *On Learning Necessary and Sufficient Causal Graphs for Target Variables*.
6. > The beer and diapers example clarification
**Re:** Yes, this is an example of including unnecessarily many variables, like "diapers", that are spuriously related to predicting "beer".
7. > Assume no variable is a causal child of the target?
**Re:** Please refer to point #2.
8. > Scoring method and target effect estimation
**Re:** We acknowledge that our method primarily aims at estimating causal effects or POCs for feature selection, and we have added a note that our approach could extend to diverse causal inquiries.
9. > Works by Janzing et al. as an alternative
**Re:** We've added Janzing et al. (2013 & 2020) that indeed shed light on causal feature selection in nonlinear contexts, to our related works. Unlike these works requiring a known graph, our method uniquely integrates causal graph learning with feature selection.
10. > Limitation to a discrete target variable
**Re:** The necessity for a discrete target variable is due to the meaningfulness of conditional probabilities in Definitions 4.1-4.3. Yet, we recognize that POCs can be extended to continuous outcomes with proper density functions of necessity and sufficiency, though estimating such concepts can be more challenging in practice. Our current choice simplifies the presentation and computation.
11. > Definition of $D_X$
**Re:** The edge set $D_X$ (see line 91) encompasses all the directed edges in the causal graph for the node set $X$. We’ve added more details.
12. > Definition 3.1 scope
**Re:** It not only includes direct parents and grandparents but all other (potentially further) ancestors, based on $PA_Y(\mathcal{G})$ defined in line 95.
13. > Why include $X_D$ in Example 4.7
**Re:** First, our work learns the NSCG when the causal graph is unknown, so the inclusion of $X_D$ is indeed possible. Second, $X_D$ may be incorporated due to its strong correlation with $X_B$ in marketing reports. The practice of including many confounders aligns with the goal of learning the sufficient causal graph (see lines 22-25). Concerning the target node being a leaf node, please refer to point #2.
14. > Applicability to linear/nonlinear SCMs
**Re:** Our method indeed applies to both linear and nonlinear SCMs. Section 5 outlines the main algorithm for linear SCMs, while an iterative algorithm for nonlinear cases is provided in Appendix A.2. We have tested additional nonlinear SCMs as reported in **new Table 2 of GR**, which shows that our method consistently outperforms all baselines.
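As an aside to points #2 and #12 above, the ancestor closure that forms an NSCG's node set per Definitions 3.1-3.2 (the target plus all of its ancestors, excluding children) can be sketched in a few lines. This is our own hypothetical illustration, not the paper's code; the graph and node names are invented.

```python
# Minimal sketch (not the authors' code): collect ALL ancestors of a target
# node in a DAG by walking parent edges transitively, as in Definition 3.1's
# node set. The edge list below is a hypothetical example.
from collections import deque

def ancestors(edges, target):
    """Return the set of all ancestors of `target` in a DAG given as a
    list of directed (parent, child) pairs."""
    parents = {}
    for u, v in edges:
        parents.setdefault(v, set()).add(u)
    seen, queue = set(), deque([target])
    while queue:
        node = queue.popleft()
        for p in parents.get(node, ()):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

# A -> B -> Y, C -> Y, Y -> D: the child D is excluded, matching the
# clarification that the NSCG contains parents/ancestors but no children.
edges = [("A", "B"), ("B", "Y"), ("C", "Y"), ("Y", "D")]
print(sorted(ancestors(edges, "Y")))  # ['A', 'B', 'C']
```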
We would like to sincerely thank the reviewer for carefully reviewing our paper and recognizing our efforts! We have striven to address all concerns comprehensively, with all clarifications and discussions integrated into the revised manuscript. We would be happy to address further comments or suggestions if there are any and we look forward to hearing from you soon.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their thoughtful response and detailed answers to the questions I raised.
The results from the new set of experiments are truly impressive. I might be misunderstanding something in the tables, but it seems that the other algorithms, based on their high FDR and SHD, performed significantly worse, to the extent that they appear almost useless. Could you briefly comment on this and confirm there isn't an error?
---
Reply to Comment 1.1.1:
Title: Response to Official Comment by Reviewer hThF
Comment: We sincerely appreciate your timely feedback and recognition of our response as well as the newly-conducted experiments. We understand your concerns about the observed high FRD and SHD in the tables, and we would like to clarify our experiment results and the evaluation metrics as follows.
1. **Evaluation Based on NSCG**: Our experiment results are evaluated against the true necessary and sufficient causal graphs (NSCGs) derived from Definition 3.2, a sub-structure of the full graph, not the full graphs themselves. The details, including the illustrations of true NSCGs and full graphs, can be found in Sections D.2 and D.3 in Appendix, where the true NSCG contains much fewer edges than the full graph.
2. **Baseline Performance**: We acknowledge that the selected baseline methods are capable of identifying the full causal graph when their model assumptions are met; however, they are less effective at finding NSCGs, which is the primary focus of this study.
3. **High FDR and SHD Explanation**: The observation of high FDR and SHD stems from the baselines identifying many spurious/irrelevant nodes/edges that aren't part of the NSCG for the target outcome of interest. For example, in Scenario 4, the discrepancy between the true NSCG (with only 3 edges) and the full graph (containing over 35 edges) leads to an unavoidable SHD higher than 30.
4. **Proposed Method Performance**: Our method excels in recovering the true NSCG, not the full causal graph. Thus, while it demonstrates superior performance in this context, it would not outperform the baselines if the goal were to recover the full graph. We've clarified the evaluation metrics in our revised manuscript.
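To make points #1 and #3 above concrete, here is a minimal sketch (our own, with hypothetical edge sets loosely mirroring Scenario 4's 3-edge NSCG versus a 35+-edge full graph) of how SHD and FDR behave when a dense estimate is scored against a sparse true NSCG:

```python
# Sketch (not the authors' evaluation code): SHD and FDR over directed edge
# sets. A learner that correctly outputs a dense full graph still scores
# poorly against a sparse true NSCG, since every edge outside the NSCG
# counts as a false discovery.

def shd(true_edges, est_edges):
    """Simplified structural Hamming distance: number of edge insertions or
    deletions needed (orientation-only differences ignored)."""
    return len(true_edges ^ est_edges)

def fdr(true_edges, est_edges):
    """False discovery rate: fraction of estimated edges not in the truth."""
    false_pos = len(est_edges - true_edges)
    return false_pos / max(len(est_edges), 1)

nscg = {(0, 1), (1, 2), (2, 3)}                             # sparse true NSCG
full = {(i, j) for i in range(9) for j in range(i + 1, 9)}  # 36-edge full graph

print(shd(nscg, full), round(fdr(nscg, full), 2))  # 33 0.92
```

Even though `full` contains every true NSCG edge, the 33 extra edges alone force an SHD above 30, matching the explanation in point #3.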
We hope these explanations adequately address your concern. We remain committed to enhancing our paper and are open to further questions or comments. Once again, thank you for your invaluable contribution to our work! | Summary: The paper proposes a necessary and sufficient causal graph that explicitly models only the causal variables required, rather than the complete causal graph, which can be inefficient and can also introduce spurious correlations between variables.
Strengths: 1. Important problem, well-defined and well written. The NSC is intuitive and makes perfect sense.
2. Using or rather extending the POC concept to assess the necessity and sufficiency of features in determining the outcome is very interesting.
Weaknesses: 1. I have some concerns regarding the scalability of the method, as the maximum number of samples considered is 100. It would also have been ideal if a real-world example had been shown and experimented with.
2. Assuming the Markovian condition limits the overall applicability of the method, especially in real-world scenarios.
3. The conclusion section is reduced to only limitations and future work, hindering the overall completeness of the paper. I know this was likely done due to the space limit, but in my personal opinion every paper should have a proper conclusion section.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Can the authors comment on how the Markovian assumption limits the applicability of their method?
2. Another factor is that the variables considered are only binary. This again affects the applicability, and it would be nice to see some discussion on this.
3. Considering 100 samples does not explicitly show the scalability of the method. Can some more large scale experiments be shown?
Overall, a nice paper with a simple yet effective idea, but not without its flaws. I lean towards acceptance as of now but have concerns about the applicability of the method in real-world scenarios and will wait for the authors' replies to make a final decision.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No concerns here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions! We greatly appreciate the reviewers’ acknowledgment that our work is an “important problem, well-defined and well written”, our proposed NSCG “is intuitive and makes perfect sense”, and that “using or rather extending the POC concept to assess the necessity and sufficiency is very interesting”. We have carefully addressed all your questions and comments. In the following, your questions and comments are summarized in quotes, followed by our point-by-point responses. Please refer to **the one-page PDF in the general response (GR)** for all additional simulation/real-data results.
1. > …scalability of the method and the max number of samples considered is 100... Can some more large scale experiments be shown?
**Re**: Thank you for this constructive comment. We have added further simulation results, with the number of nodes increased to 50 (as new Scenario 5) and the sample size increased to 1000 and 3000 for Scenarios 4-5. As shown in **Tables 1-3 in GR**, our method excels in these enhanced settings, which highlights its practical applicability.
2. > ...ideal if some real world example was shown and experimented with.
**Re**: We value this comment and have conducted additional real data analysis using the benchmark data from Sachs et al. (2005). To validate our method's capacity to find the NSCG and align with Definition 3.2, we designated the protein Akt as the target outcome. This designation ensures that NSCG exists (see **Figure 2 in GR**) and that finding an NSCG is meaningful. Our method and seven baseline methods were applied and evaluated against the true NSCG associated with the protein Akt. **Table 4 in GR** shows that our method achieves the best performance in finding the NSCG concerning the protein Akt.
3. > Assuming Markovian condition limits the overall applicability of the methods.
**Re**: Thank you for this valuable comment. We agree that the causal Markovian condition can be violated in some real-world applications, e.g., in the presence of unmeasured confounders, and we would like to emphasize that this is a limitation of most causal discovery algorithms. Without the causal Markovian condition, one can recover at most a partial graph. To provide a precise yet sufficient explanation of an outcome of interest in causal graphical terms, which is the main goal and motivation of our paper, we require the causal Markovian condition. However, we acknowledge the possible practical limitations of requiring such an assumption, and extensions are of particular interest. We have included this point in our open discussion as well.
4. > Conclusion reduced to only limitations and future work…every paper should have a proper conclusion.
**Re**: We are very grateful for this excellent suggestion. The revised conclusion of our paper is as follows:
*In this work, we introduced NSCSL that leverages causal effects/POCs to systematically assess feature importance while learning a causal graph. By identifying a subgraph closely related to the outcome, our method filters irrelevant variables, presenting a significant advancement in the field. Extensive empirical evaluations on simulated and real-world data underscore NSCSL's superior performance over existing algorithms, including important findings on yeast genes and the protein signaling network.*
*However, this promising advancement is not without limitations. First, NSCSL, like most existing causal structural learning methods, assumes no unmeasured confounders (A2) and the causal Markov condition. These assumptions may not hold in practice, leading to biased causal effect estimates and potential errors in the causal graph. Second, NSCSL employs absolute causal effects as a substitute for POCs to facilitate estimation in high-dimensional settings. Although theoretically consistent under certain conditions, examining the differences between these two methods in general feature and outcome spaces is an area for future research.*
5. > …variables considered are only binary…nice to see some discussion on this.
**Re**: Thank you for your insightful comments. Allow us to clarify that in our approach, the features $Z$ can encompass either discrete or continuous variables, while the outcome $Y$ is permitted to be a discrete random variable, not exclusively binary. Please refer to lines 99-101 for detailed information on this data structure flexibility.
Our proposed POCs indeed generalize the bi-variable and binary setting found in Tian & Pearl (2000) (see lines 156-157). The stipulation of a discrete outcome variable ensures that the conditional probabilities in Definitions 4.1-4.3 are meaningful. In our simulations, we selected the noise variables to be binary, thus creating discrete data for both features and outcomes (lines 297-299). Moreover, the binary requirement, mentioned solely for establishing theoretical consistency between causal importance based on treatment effects and evaluation by POCs (Theorem 4.6, lines 197-198), does not constrain our NSCSL in practice.
Finally, we acknowledge that the proposed POCs could be extended to handle continuous outcomes by defining an appropriate density function of necessity and sufficiency. However, estimating such a concept would be more challenging in practice relative to our current approach, which relies on discrete outcomes. We have chosen this path as it simplifies the presentation and facilitates a more practical analysis of causal relationships.
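As a hedged illustration of the binary Tian & Pearl (2000) setting discussed above, the classical lower bound $\text{PNS} \geq \max(0, P(y \mid do(x)) - P(y \mid do(x')))$ can be checked by Monte Carlo. The monotone SCM and its parameters below are our own invention, not the paper's generalized POCs or its Theorem 4.6 bound.

```python
# Sketch (ours, not the paper's): the classical Tian & Pearl lower bound on
# the probability of necessity and sufficiency,
#   PNS >= max(0, P(Y=1 | do(X=1)) - P(Y=1 | do(X=0))),
# estimated under a hypothetical monotone binary SCM where both potential
# outcomes are driven by a single exogenous noise U.
import random

random.seed(0)
n = 100_000
y1 = y0 = in_band = 0
for _ in range(n):
    u = random.random()
    y_do1 = u < 0.9            # Y under do(X=1)
    y_do0 = u < 0.2            # Y under do(X=0)
    y1 += y_do1
    y0 += y_do0
    in_band += y_do1 and not y_do0   # the true PNS event: Y_1=1 and Y_0=0

bound = max(0.0, y1 / n - y0 / n)
true_pns = in_band / n
# Under monotonicity the bound is tight: both quantities are close to 0.7.
print(f"bound={bound:.3f}  PNS={true_pns:.3f}")
```

Because this toy SCM is monotone, the interventional-contrast bound coincides with the true PNS, matching the identification results cited in the response.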
We would like to sincerely thank the reviewer for reviewing our paper! We have strived to address all the reviewers' concerns appropriately. All the above clarifications and discussions have been incorporated into the revised paper. We would be delighted to entertain further comments or suggestions if any, and we eagerly anticipate your feedback.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I would like to thank the authors for their response and the new results. My concerns are clarified, and thus I have raised my score accordingly to 6.
---
Reply to Comment 1.1.1:
Title: Thank you note to Reviewer Qe9n
Comment: Thank you very much for your kind acknowledgment of our response and the newly-added results. We are delighted to hear that your concerns have been clarified, and we appreciate the increased score. If there are any further questions or areas of interest, please don't hesitate to reach out. We remain committed to engaging with your insights and making our work as strong as possible. Thank you again for your thoughtful review and support! | Summary: This paper is concerned with learning causal graphs from observational data. In particular, the authors propose to learn a subgraph of the full causal graph, which they refer to as necessary and sufficient causal graphs (NSCGs). They propose an algorithm which measures conditional probabilities of causation between variables to measure the causal effect of some treatment variable on the target variable. They evaluate their algorithm on synthetic and one real dataset and compare its performance against three other popular CD algorithms.
Strengths: The paper is well written and interesting to read. The authors provide illustrative examples that help understand the theory and practical implications!
Weaknesses: - Not all causal discovery algorithms assume causal sufficiency, e.g. FCI [A] and related algorithms, output a CPDAG without assuming causal sufficiency. There exists some other notable work on this topic [e.g. B, C]
- I find the experimental evaluation on synthetic data to be too limited. The authors compare their method only to three other Causal Discovery Algorithms (including NOTEARS, which is actually not suitable for causal discovery because of missing scale-invariance [D]). In particular, they do not compare their method to algorithms that do not assume causal sufficiency (e.g. FCI).
- On the one real-world dataset, the authors compare their method to NOTEARS only.
- I find it difficult to follow their arguments, why the proposed algorithm performs better than NOTEARS on the real-world dataset. Statements like “This gene is required for sulfur amino acid synthesis” (L. 349) need a reference and further explanation.
[A] P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction, and Search. The MIT Press, 2001. doi: 10.7551/mitpress/1754.001.0001.
[B] J. M. Ogarrio, P. Spirtes, and J. Ramsey, “A Hybrid Causal Search Algorithm for Latent Variable Models,” in Proceedings of the Eighth International Conference on Probabilistic Graphical Models, PMLR, Aug. 2016.
[C] R. Bhattacharya, T. Nagarajan, D. Malinsky, and I. Shpitser, “Differentiable Causal Discovery Under Unmeasured Confounding,” Proceedings of the International Conference on Artificial Intelligence and Statistics, vol. 130, Apr. 2021.
[D] M. Kaiser and M. Sipos, “Unsuitability of NOTEARS for Causal Graph Discovery when Dealing with Dimensional Quantities,” Neural Processing Letters, vol. 54, no. 3, 2022.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - I found a small typo in L. 105: “while keeping the rest [of the] model unchanged”
- Could the authors comment on whether they think using the Structural Intervention Distance (SID) [E] would be a meaningful additional metric for their empirical evaluation?
- Why do the authors evaluate their algorithm on a single real-world dataset only? Why did they choose this particular dataset and not one of the more famous ones, e.g. [F]?
- Can the authors comment on whether their algorithm is scale-invariant, given NSCSL uses absolute causal effects?
[E] Jonas Peters and Peter Bühlmann. Structural intervention distance for evaluating causal graphs. Neural Computation, 27(3):771–799, 2015.
[F] K. Sachs, O. Perez, D. Pe’er, D. A. Lauffenburger, and G. P. Nolan, “Causal Protein-Signaling Networks Derived from Multiparameter Single-Cell Data,” Science, vol. 308, no. 5721, pp. 523–529, Apr. 2005.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors provide an open discussion on the limitations of their proposed algorithm, which is greatly appreciated! Another limitation might be the strong assumptions on conditionals, however, this is a limitation of most causal discovery algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions! We're gratified by the reviewers’ recognition of our paper as well-written and interesting, with illustrative examples and an open discussion on limitations. We have carefully addressed all your questions and comments. In the following, your questions and comments are summarized in quotes, followed by our point-by-point responses. Please refer to **the one-page PDF in the general response (GR)** for all additional simulation/real-data results.
1. > Not all causal discovery algorithms assume causal sufficiency, e.g. FCI [A]…[B, C]
**Re:** Thanks for your insightful comment. We agree that not all causal discovery algorithms assume causal sufficiency; without it, however, one can only recover a partial graph. We require causal sufficiency (i.e., A2) to find a precise and adequate causal graph that explains the target outcome. We recognize this potential limitation, and extensions toward works such as [A, B, C] are intriguing. Specifically, we have outlined an iterative algorithm in Appendix A.2, which employs an arbitrary causal discovery method and subsequently estimates POCs until convergence, thereby allowing the integration of FCI without assuming causal sufficiency.
2. > ...experimental evaluation...too limited
**Re:** This very constructive comment led us to add more simulations, with the number of nodes increased to 50 (new Scenario 5) and the sample size increased to 1000 and 3000 for Scenarios 4-5. As shown in **Tables 1-3 in GR**, our method excels in these enhanced settings, which highlights its practical applicability.
3. > ...compare...only to three…NOTEARS…missing scale-invariance [D]...not compare…FCI
**Re:** Thanks for this excellent suggestion. We've expanded the comparison studies with four additional baselines: FCI, DAG-GNN, GSGES, and CAM. The new comparisons encompass S4 ($p=20$, $n=100,1000$) and the new S5 ($p=50$, $n=1000,3000$) under varied settings. As shown in **Tables 1-3 in GR**, our method outperforms all baselines. Additionally, we acknowledge NOTEARS' missing scale invariance but emphasize that our method's flexibility allows integration with various causal discovery methods, such as the aforementioned FCI.
4. > ...real...compare...to NOTEARS only
**Re:** Your constructive comment led us to conduct additional real data analysis using all baseline methods. The summarized results in **Table 5 in GR** highlight our method's ability to identify relevant genetic influencers for the variant YER124C without contamination by irrelevant genes. See also the response to point #9 for the additional analysis of data from Sachs et al. (2005).
5. > ...why…better than NOTEARS…L349...need…explanation
**Re:** We appreciate this insightful suggestion and have included more references and explanations. Here, YLR303W is essential for sulfur amino acid synthesis [1], while the target YER124C is a daughter cell-specific protein involved in cell wall metabolism [2]. It has been shown that sulfur amino acid synthesis can influence cell wall metabolism [3][4]. These findings indicate that NSCSL, which additionally identified YLR303W, performs better than NOTEARS.
[1] Brzywczy J (1993) Role of O‐acetylhomoserine in sulfur amino acid synthesis. Yeast.
[2] Colman-Lerner A (2001) Yeast daughter-specific genetic programs. Cell.
[3] Takahashi H (2001) Sulfur economy and cell wall biosynthesis. Plant physiology.
[4] de Melo (2019) The regulation of the sulfur amino acid. Scientific Reports.
6. > typo in L 105
**Re:** Thank you so much for pointing this out. We’ve corrected this typo.
7. >...comment on.. the Structural Intervention Distance (SID) [E]
**Re:** We truly appreciate this constructive suggestion. Indeed, we acknowledge that SID is a noteworthy metric for causal discovery evaluation, and we believe that adding SID could significantly enhance our empirical studies. Unfortunately, the SID R package is currently unavailable on CRAN (see https://cran.r-project.org/web/packages/SID/index.html), and other tools like 'cdt' rely on this R package to compute SID. We are in the process of implementing SID in Python, but this effort is beyond the scope of the current rebuttal period. We are committed to including this metric once it becomes available.
8. > Why…evaluate…on a single real dataset only? Why…not…[F]?
**Re:** Thank you for this insightful comment. We have run additional real data analysis using the benchmark data from Sachs et al. (2005). To align with Definition 3.2, we designated the protein Akt as the target outcome. Our method and all baselines were applied and evaluated against the true NSCG for Akt (see **Figure 2 in GR**). **Table 4 in GR** shows that our method achieves the best performance in finding the NSCG.
9. > ...is scale-invariant, given NSCSL uses absolute causal effects?
**Re:** We appreciate this excellent inquiry. NSCSL is scale-invariant when we appropriately choose the causal discovery base learner and model the treatment effects/POCs. Though NOTEARS lacks scale invariance, our method's flexibility allows for the integration of scale-invariant causal discovery base learners, e.g., running NSCSL with FCI as mentioned above. Additionally, under the LSEM, rescaling will not affect the relative rank of the features based on absolute causal effects. In the nonlinear case, we propose to use POCs, which are scale-invariant by definition.
10. > ...assumptions on conditionals
**Re:** We agree with your observation and appreciate your acknowledgment that assumptions on conditionals are indeed strong but commonly represented in most causal discovery algorithms. We have taken care to include this point in our open discussion.
We would like to sincerely thank you for reviewing our paper! We have tried to address all your concerns in a proper way. All the above clarifications and discussions have been included in the revised paper. We would be happy to address further comments or suggestions if there are any and we look forward to hearing from you soon.
---
Rebuttal Comment 1.1:
Title: Eagerly Looking Forward to Feedback on Our Response
Comment: We sincerely appreciate the time and effort you've devoted to reviewing our work, and for providing so much valuable and insightful feedback!
Following your constructive suggestions, we have conducted five additional sets of experiments and have further clarified our assumptions, base learner, scale-invariance, and real data evaluation. All of these details can be found in our response and within the one-page PDF file.
We sincerely hope our further clarifications and experiments can fully address your concerns and can be helpful in the evaluation of our work. We are eagerly looking forward to your kind feedback! | Summary: This paper studies the problem of feature selection when performing causal discovery and contributes to the limited literature in this field. Given a set of features, the main goal is to learn a causal graph from a subset of these features such that the learned graph only contains features that are “necessary and sufficient” for explaining an outcome of interest. This paper then develops two notions of quantifying the necessary and sufficient features using POC (probability of causation) and DE (direct effect)/TE (total effect). Given the measures to quantify the relevant features, they formulate a learning algorithm that jointly selects these relevant features (including the outcome of interest) and learns the causal graph. They also experimentally verify their method on four synthetic and one real-world dataset and show improvement over other baselines.
Strengths: 1. **Clarity**: This paper is clearly written and easy to follow. Though there is one limitation to the motivation of using POC (probability of causation) as the main metric to quantify the spurious features (see comment1 in the weakness part of the review).
2. **Significance**: This paper brings the notion of variable selection when learning the causal graph which is less explored in the literature.
3. **Theory:** Theorem 4.6 connects the two possible quantities i.e. POC and DE/TE, that can be used to quantify the spuriousness of each other and gives an expression for their lower bound in terms of marginals from the observational distribution which is novel.
4. **Experiment**: Across synthetic and real datasets one of their method i.e. NSCSL-TE shows consistent improvement compared to other baselines considered over different considered metrics (FDR, TPR, and SHD).
Weaknesses: 1. **Why POC instead of directly considering DE/TE**: Shouldn’t all the features with non-zero DE/TE on the outcome of interest should be considered in the final selected graph? Why POC is more fundamental than DE/TE is not properly motivated. I understand that both quantities are related (as stated in Theorem 4.6) but if the goal is to remove the spurious features when creating the causal graph then shouldn’t we directly use the TE/DE?
2. **Theory**: There is no guarantee that selecting the features based on the POC/DE/TE will converge towards the actual set of necessary and sufficient features as admissible by Definition 3.2.
3. **Experiment**: The synthetic setup considered in the paper assumes a linear SEM as the data-generating process, which might be limiting for real-world DGPs (data-generating processes). In the appendix, the authors extend their algorithm to non-linear DGPs. It would be interesting to see some results on synthetic datasets with non-linear DGPs that could further attest to the applicability of their method in real-world scenarios, in addition to the one real-world dataset already considered in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. **Theory**: Line 125-126 state that Definition 3.1 refer to the sub-structure in the whole graph $G_{O}$ containing directed edges or path towards Y. It is not clear how the constraint on conditional probability distribution in Definition 3.1 allows $G_{V}$ to have features/nodes from $G_{O}$ that have a path towards Y. An example or an explanation will be helpful.
2. **Experiment**: Why does TPR decrease for NSCSL-DE for the S2-S4 on increasing the sample size?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and suggestions. We greatly appreciate your acknowledgment that our work "of variable selection when learning the causal graph is less explored in the literature", "Theorem 4.6 connects the two possible quantities is novel", our method "shows consistent improvement compared to other baselines", and our paper "is clearly written and easy to follow". We have carefully addressed all your questions and comments. In the following, your questions and comments are summarized in quotes, followed by our point-by-point responses. Please refer to **the one-page PDF in the general response (GR)** for all additional simulation/real-data results.
1. > Why POC instead of directly considering DE/TE?
**Re:** Thank you for this valuable comment. POC, initially proposed by Pearl et al. (2000), assesses both necessity and sufficiency in a bi-variable, binary setting. In contrast, DE/TE quantifies the impact of treatment on the outcome by increasing it by one unit (see Pearl et al. (2009)). While we acknowledge that non-zero DE/TE should indeed be regarded as necessary, our contribution lies in connecting these two different concepts and demonstrating their equivalence under certain conditions. We emphasize that we are not arguing for the superiority of POC over DE/TE; rather, we find POC more conventional for introducing necessity and sufficiency in the existing literature (see Pearl et al. (2000), Tian & Pearl (2000), and Wang & Jordan (2021)).
2. > There is no guarantee that selecting the features based on the POC/DE/TE.
**Re:** Thank you for this insightful comment. The challenge in developing a theory for the selected features is multi-faceted. Our method's unique aspect is that we learn the causal graph while selecting causal features, meaning the consistency of the selected features depends on the causal graph estimation's consistency. Further, developing a post-estimation selection is challenging, and the ambiguity in defining consistency arises from balancing losses of causal discovery and lower bounds of POCs/causal effects. While this presents a complex theoretical task, we provide further insights by establishing the consistency of estimated causal graphs:
***Theorem A***: *Assume that $B$ and $O$ follow LSEM with independent Gaussian error and equal variance and that the true ordering of $B$ is consistently estimated. The estimated matrix $\widehat{B}$ minimizing the loss in Equation (5) converges to the true $B$ with the probability going to 1 as $n\to\infty$.*
The conditions in Theorem A align with those commonly imposed in causal structural learning (e.g., Shi et al. (2021)). Our proof follows similar strategies but accounts for the extra penalty term from causal effects. Notice that the explicit forms of causal effects under LSEM are linear combinations of elements of $B$ (see lines 239-249). This implies that our new regularization term similarly vanishes as $n$ goes to infinity. We have included the full proof in the revised paper, although it is omitted here due to character limits. In addition, this consistency is also empirically verified by the simulation studies.
3. > The synthetic setup for non-linear DGPs (data-generating process).
**Re:** We greatly value your recommendation and have tested additional non-linear DGPs for the sample size $n=1000$ and the number of nodes $p=20$. As in **new Table 2 of GR**, the proposed method consistently outperforms all baselines, which demonstrates its applicability in handling complex scenarios. In addition, we further conducted additional real data analysis using the benchmark data from Sachs et al. (2005). To validate our method's capacity to find the NSCG and align with Definition 3.2, we designated the protein Akt as the target outcome (see **Figure 2 in GR**). Our method and seven baseline methods were applied and evaluated against the true NSCG associated with the protein Akt. **Table 4 in GR** shows that our method achieves the best performance in finding the NSCG concerning the protein Akt.
4. > How Definition 3.1 allows to have features/nodes from that have a path towards Y.
**Re:** Thank you for your excellent question. Definition 3.1 focuses on the causal chain starting from the outcome $Y$ and traces back to $Y$'s parents and ancestors, as represented by the set $PA_Y(\mathcal{G})$ (defined in line 95). By comparing the joint probabilities of $Y$'s parents/ancestors and $Y$ itself in the full graph $\mathcal{G}_O$ with those in the sub-graph $\mathcal{G}_V$, we can identify the sub-structure (possibly non-unique, per the definition of sufficiency) that contains directed edges or paths towards $Y$ and achieves the same joint distribution. Example 4.7 illustrates this process, demonstrating how either a graph containing $[X_F,X_B,X_D]$ or a subgraph with $[X_F,X_B]$ can be a sufficient graph. The application of Definition 3.2 further refines this to identify the minimal substructure, i.e., $X_F\to X_B$, as the NSCG for node $X_B$.
5. > Why does TPR decrease for NSCSL-DE for the S2-S4 on increasing the sample size?
**Re:** This is indeed a keen observation! As we elaborated in lines 325-327, NSCSL based on TE identifies all causal paths towards the outcome, while NSCSL-DE only uncovers direct relationships. Given that the graphs in S2-S4 contain both direct parents and ancestors, NSCSL-DE retrieves only a subset of the true NSCG, leading to a slightly lower TPR. As the sample size grows, this TPR further decreases, converging to the true rate, which reflects the proportion of direct parents among all parents/ancestors.
We extend our heartfelt appreciation for your thoughtful review and comments. We have made every effort to respond to your concerns accurately and have incorporated these explanations into the revised paper. Please do not hesitate to share any additional comments or questions; we look forward to your further insights and hope to continue improving our work with your guidance.
---
Rebuttal Comment 1.1:
Title: Eagerly Looking Forward to Feedback on Our Response
Comment: We greatly appreciate the time and effort you've invested in reviewing our work and providing insightful and constructive feedback!
In response to your valuable suggestions, we have conducted additional experiments on the non-linear case and further clarified our motivation, definition, and simulation results, followed by an additional theory for graph consistency.
We earnestly hope that these clarifications and expanded experiments will thoroughly address your concerns. We are eagerly looking forward to your kind feedback! | Rebuttal 1:
Rebuttal: We extend our heartfelt thanks to all reviewers for their insightful comments and suggestions. We are encouraged by their highlight of **various acknowledgments**, which affirm the quality and novelty of our work, as summarized below:
- Reviewer vBqs appreciated the novel NSCGL method and its theoretical backing and commended the experiments on synthetic and real data.
- Reviewer hnpE lauded the paper for exploring less-trodden grounds in variable selection for causal graph learning, noting the novelty of Theorem 4.6 and the method's consistent improvement over other baselines. The clear writing style was also praised.
- Reviewer LXJs recognized the well-written paper, illustrative examples, and open discussion on limitations, finding it interesting to read.
- Reviewer Qe9n approved the work as an essential and well-defined problem, appreciated the intuitive NSCG, and found the extension of the POC concept intriguing.
- Reviewer hThF commended the motivation, introduction, comprehensive definitions, and explanations, recognizing the relevance of the method across various causal inference problems.
### **Summary of Common Comments and Added Simulation/Real-data Results**
Next, we summarize the **common questions/comments raised by the reviewers** in quotes and then provide our point-by-point responses. Please refer to **the one-page PDF in the general response (GR)** for all additional simulation/real-data results we conducted.
1. > The generated data is limited to only 100 samples for the first three scenarios (Reviewers vBqs, LXJs, Qe9n)
**Re**: We have added further simulation results, with the number of nodes increased to 50 (as new Scenario 5) and the sample size increased to 1000 and 3000 for Scenarios 4-5. As shown in **Tables 1-3 in GR**, our method excels in these enhanced settings, which highlights its practical applicability.
2. > Include more and new state-of-the-art algorithms (Reviewers vBqs, LXJs, hThF)
**Re**: We've expanded the comparison studies to include four additional state-of-the-art methods, including DAG-GNN (suggested by reviewer vBqs), GES with generalized score (GSGES, suggested by reviewer vBqs), FCI (suggested by reviewer LXJs), and CAM (a generalized version of LinGAM). The new comparisons encompass Scenario 4 ($p=20$, $n=100,1000$) and new Scenario 5 ($p=50$, $n=1000,3000$), under varied settings. As displayed in **Tables 1-3 in GR**, our method outperforms all baseline methods.
3. > Incorporate the real dataset from Sachs et al. (2005) (Reviewers vBqs, LXJs, Qe9n)
**Re**: We have conducted additional real data analysis using the benchmark data from Sachs et al. (2005). To validate our method's capacity to find the NSCG and align with Definition 3.2, we designated the protein Akt as the target outcome. This designation ensures that NSCG exists (see **Figure 2 in GR**) and that finding an NSCG is meaningful. Our method and seven baseline methods were applied and evaluated against the true NSCG associated with the protein Akt. **Table 4 in GR** shows that our method achieves the best performance in finding the NSCG concerning the protein Akt.
4. > The synthetic setup for non-linear DGPs (data-generating process) (Reviewers hnpE, hThF)
**Re:** We have tested additional non-linear DGPs for the sample size $n=1000$ and the number of nodes $p=20$. As in **new Table 2 of GR**, the proposed method consistently outperforms all baselines, which demonstrates its applicability in handling complex scenarios.
### **Additional Notable Suggestions and More Simulation/Real-data Results**
Besides the common comments we received, we would like to further summarize **the other additional simulation/real-data results we provided** in response to many notable and valuable suggestions from the reviewers.
5. > A comparison of the computational requirements and runtime (Reviewer vBqs)
**Re**: We have included the average running time of NSCSL against benchmarks in all simulation settings. **New Tables 1-3 in GR** reveal that NSCSL is as fast as the quickest benchmarks such as PC, LinGAM, and FCI, and significantly faster than others like DAGGNN, GSGES, and CAM. Our method's integration of treatment effects into the optimization adds efficiency and restricts the searching space, making it practical and even beating NOTEARS in computation.
6. > Diverse synthetic data such as the Barabási–Albert/scale-free model (Reviewer vBqs)
**Re**: We have included additional simulation results based on the scale-free (SF) model, comparing them to the ER model used in the original paper. Comparing the results of new Scenario 5 ($p=50$, $n=1000$) in **Tables 1 and 3 in GR**, our method consistently performs the best in finding the NSCG, regardless of the synthetic data model used.
7. > Parameter sensitivity analysis should be included (Reviewer vBqs)
**Re**: We have conducted comprehensive sensitivity analyses concerning all hyperparameters listed in Table D.1 in the appendix. This includes the L1 penalty, the maximum ascent steps, the tolerance level, and the upper limit of the dual updating. These results, evaluated using Scenario 4 ($p=20$, $n=1000$), are presented in **Figure 1 in GR**, indicating that our method remains robust to these parameters, provided they are set within a reasonable range.
8. > Real data compare to NOTEARS only (Reviewer LXJs)
**Re:** We have conducted additional real data analysis using all baseline methods. The summarized results in **Table 5 in GR** highlight our method's ability to identify relevant genetic influencers for the variant YER124C without contamination by irrelevant genes.
We extend our sincere gratitude for all reviewers' thorough examination of our paper. These insights and suggestions have substantially enriched our work. All the above clarifications, discussions, and improvements are now part of the revised manuscript. We are eager to address any further comments or suggestions and look forward to your continued feedback.
Pdf: /pdf/cc7ab2a21bfc59978d966b186e634dca5fa5c99b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper addresses the challenge of discovering causal relationships among variables within a complex graph by proposing the learning of necessary and sufficient causal graphs (NSCGs). Unlike existing methods that consider all variables in the graph, NSCGs exclusively consist of causally relevant variables for a specific outcome of interest, referred to as causal features. The authors introduce the concept of probabilities of causation to assess the importance of features in the causal graph and identify a subgraph relevant to the outcome.
Strengths: 1. The problem (i.e., learning a class of necessary and sufficient causal graphs) studied in this paper is interesting;
2. The necessary and sufficient causal structural learning (NSCSL) algorithm offers a new method to learn NSCGs from data;
3. The authors have provided theoretical support for their approach;
4. The experiments were conducted on both synthetic and real data.
Weaknesses: 1. This paper does not explicitly discuss the computational complexity of the NSCSL algorithm compared to other existing algorithms. Causal structure learning is often computationally expensive, which significantly restricts its practical utility when the real system is complex with hundreds or even thousands of features. Providing a comparison of the computational requirements and runtime of the proposed algorithm could further enhance the assessment of its workload. This becomes particularly important when considering variable selection methods in causal graphs. Without such a comparison, it remains uncertain whether NSCSL would be practically viable, as it might prove to be more computationally expensive than other established methods. Thus, addressing this aspect is paramount to establishing the practical utility and potential applicability of NSCSL in real-world scenarios.
2. The current experimental results do not convincingly demonstrate the effectiveness of the proposed method. Concerning the synthetic data, the authors have not provided a clear explanation of the ER model and the reasons for choosing it over other models such as the Barabási–Albert model or scale-free model. It would be more appropriate to utilize different models to generate diverse synthetic data. Additionally, the generated data is limited to only 100 samples or features for the first three scenarios, making it less practically applicable. Moreover, the evaluation's reliance on relatively old baselines (NOTEARS, PC, ICA) is questionable, and it should include state-of-the-art algorithms like DYNOTEARS, DAG-GNN, and GSGES. Regarding the real data, the paper relies solely on one gene dataset without a ground-truth causal structure. To enhance the validity, benchmark datasets like the real dataset from Sachs et al. (2005) should be incorporated, as it comes with a consensus network that is accepted by the biological community. Finally, there are several parameters in the proposed method. Parameter sensitivity analysis should be included.
K. Sachs, O. Perez, D. Pe’er, D. A. Lauffenburger, and G. P. Nolan. Causal Protein-Signaling Networks Derived from Multiparameter Single-Cell Data. Science, 2005.
3. This paper is not easy to follow due to so many abbreviations and mathematical notations. To enhance the paper's readability, including a notation table and full form of abbreviations on the current page would be beneficial, ensuring that readers can easily understand the content without the need for constant cross-referencing.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. What are the computational complexity and runtime of the NSCSL algorithm compared to other existing algorithms?
2. Why choosing ER model, not the other models, to generate the data?
3. Why not consider more recent algorithms as the baselines?
4. The pros and cons of NSCSL vs. variable selection methods in causal graphs or backtracking on causal graphs from the outcomes of interest.
5. The authors should conduct parameter sensitivity analysis in the experiment.
I have read the author’s rebuttal. Some of my concerns have been addressed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: As pointed out by the authors, one potential limitation is the assumption of no unmeasured confounders, which may not always hold in real-world scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments and suggestions! We are encouraged by your acknowledgment of the interesting task of learning a class of NSCG, our novel NSCGL method, and the theoretical support, coupled with experiments on both synthetic and real data. Below, we summarize your questions and comments in quotes and provide our point-by-point responses. Please refer to **the one-page PDF in the general response (GR)** for all additional simulation/real-data results.
1. > The computational complexity of the NSCSL algorithm.
**Re**: We appreciate this insightful comment. The computational complexity of NSCSL comprises two parts: the cost of causal discovery, $g(n,p)$, and the cost of estimating causal effects/scores, $f(n,p)$, where $n$ is the sample size and $p$ is the number of nodes. In the linear case, our method learns the features and causal graph through a single-step optimization (detailed in Section 5 and Equation (5)), with complexity cubic in the number of nodes, $g(n,p) = \mathcal{O}(p^3)$, following Zheng et al. (2018). Here, the causal effect computation is linear-time and is thus dominated by the causal discovery cost. In the nonlinear case, according to Appendix A.2, the time complexity depends on the base causal discovery method and the maximum number of iterations $K$, yielding $\mathcal{O}[K(g(n,p) + f(n,p))]$. Supporting runtime details follow in the next response.
2. > A comparison of the computational requirements and runtime.
**Re**: Thank you for this excellent comment. We have included the average running time of NSCSL against benchmarks in all simulation settings. **New Tables 1-3 in GR** reveal that NSCSL is as fast as the quickest benchmarks such as PC, LinGAM, and FCI, and significantly faster than others like DAGGNN, GSGES, and CAM. Our method's integration of treatment effects into the optimization adds efficiency and restricts the search space, making it practical and even beating NOTEARS in computation.
3. > Diverse synthetic data such as the Barabási–Albert/scale-free model.
**Re**: We really appreciate your great suggestion. We have included additional simulation results based on the scale-free (SF) model, comparing them to the ER model used in the original paper. Comparing the results of new Scenario 5 ($p=50$, $n=1000$) in **Tables 1 and 3 in GR**, our method consistently performs the best in finding the NSCG, regardless of the synthetic data model used.
4. > The generated data is limited to only 100 samples for the first three scenarios.
**Re**: Thank you for this constructive comment. We have added further simulation results, with the number of nodes increased to 50 (as new Scenario 5) and the sample size increased to 1000 and 3000 for Scenarios 4-5. As shown in **Tables 1-3 in GR**, our method excels in these enhanced settings, which highlights its practical applicability.
5. > Include state-of-the-art algorithms like DYNOTEARS, DAG-GNN, and GSGES.
**Re**: Thank you for this excellent suggestion. We've expanded the comparison studies to include four additional state-of-the-art methods, including DAG-GNN, GSGES, FCI, and CAM. Since DYNOTEARS targets non-stationary and time-series data, it does not apply to our focus. The new comparisons encompass Scenario 4 ($p=20$, $n=100,1000$) and new Scenario 5 ($p=50$, $n=1000,3000$), under varied settings. As displayed in **Tables 1-3 in GR**, our method outperforms all baseline methods.
6. > Incorporate the real dataset from Sachs et al. (2005).
**Re**: We value this comment and have conducted additional real data analysis using the benchmark data from Sachs et al. (2005). To validate our method's capacity to find the NSCG and align with Definition 3.2, we designated the protein Akt as the target outcome. This designation ensures that NSCG exists (see **Figure 2 in GR**) and that finding an NSCG is meaningful. Our method and seven baseline methods were applied and evaluated against the true NSCG associated with the protein Akt. **Table 4 in GR** shows that our method achieves the best performance in finding the NSCG concerning the protein Akt.
7. > Parameter sensitivity analysis should be included.
**Re**: Thank you for your excellent suggestion. We have conducted comprehensive sensitivity analyses concerning all hyperparameters listed in Table D.1 in the appendix. This includes the L1 penalty, the maximum ascent steps, the tolerance level, and the upper limit of the dual updating. These results, evaluated using Scenario 4 ($p=20$, $n=1000$), are presented in **Figure 1 in GR**, indicating that our method remains robust to these parameters, provided they are set within a reasonable range.
8. > Abbreviations and mathematical notations.
**Re**: We appreciate this constructive feedback and have compiled a notation table and the full forms of abbreviations mentioned throughout the paper. This information is now included in the revised paper.
9. > The pros and cons of NSCSL vs variable selection methods.
**Re**: This insightful comment prompted us to clarify the distinctions between our proposed method and existing variable selection methods. The primary advantage of NSCSL is that it concurrently learns the causal graph while selecting the causal features. In contrast, existing methods, including variable selection in causal graphs and backtracking on causal graphs, depend on a true or known graph to conduct feature selection for the outcome or target of interest. While our approach introduces additional time costs to learn the unknown causal graph, along with potential estimation errors, this holistic approach offers significant benefits in uncovering the causal features.
We extend our sincere gratitude for your thorough examination of our paper. These insights and suggestions have substantially enriched our work. All the above clarifications, discussions, and improvements are now part of the revised manuscript. We are eager to address any further comments or suggestions and look forward to your continued feedback.
---
Rebuttal Comment 1.1:
Title: Eagerly Looking Forward to Feedback on Our Response
Comment: We deeply appreciate the time and effort you have devoted to reviewing our work and providing us with insightful, detailed, and encouraging feedback.
Following your constructive suggestions, we have conducted six additional sets of experiments and have further clarified our computational complexity and advancements. All of these details can be found in our response and within the one-page PDF file.
We sincerely hope our further clarifications and experiments can fully address your concerns. We are eagerly looking forward to your kind feedback! | null | null | null | null | null | null |
When Do Graph Neural Networks Help with Node Classification? Investigating the Homophily Principle on Node Distinguishability | Accept (poster) | Summary: This paper addresses the question: when do graph-aware models outperform graph-agnostic ones on node classification tasks? First, theoretical analysis is conducted. Most of the analysis assumes the two-class CSBM-H model – a generalization of the standard stochastic block model, where node features are sampled from normal distributions (with parameters depending on class labels), and parameter $h$ controls the homophily level. Then, Probabilistic Bayes Error and negative generalized Jeffreys divergence are used to quantify node distinguishability for several node representations: the original node features, aggregated node features (one step of random walk-based aggregation), and high-pass filtered features (after one step of the corresponding aggregation). The analysis shows that different representations are beneficial for different homophily regimes.
In the experiments, it is first shown that if a model has more distinguishable representation, then it usually performs better (Section 4.1). In Section 4.2, it is proposed to predict whether graph-aware models are better than graph-agnostic ones by training a simple classifier on original and aggregated features.
Strengths: The theoretical analysis of PBE and Jeffreys divergence for the CSBM-H and the discussion in Section 3.4 are intuitive and allow one to understand why different aggregation types are helpful for different homophily regimes.
Weaknesses: - The analysis assumes a particular aggregation (random walk), and it is not clear whether conclusions may change for other aggregations.
- The paper is limited to heterophilous datasets known to have certain drawbacks [1,2].
- The proposed measure (Section 4.2) is quite straightforward: it suggests predicting the relative performance of a GNN by training its simplified variant.
- The paper is hard to follow in several places.
I also have the following comments and questions.
1. On the definition of CSBM-H. According to the definition, $h \cdot d_0$ and $h \cdot d_1$ have to be integers. Also, the possible values of $d_0$ and $d_1$ depend on the class sizes (the number of outgoing edges should be equal for the two classes). This would not be necessary if the graphs were assumed to be directed, but this is not the case according to Section 2.
2. There is another simple measure called Label Informativeness [3] that is known to better agree with GNN performance than homophily [3].
3. The explanation in lines 168-172 was unclear to me; a more detailed explanation would help.
Minor:
- Definition of $h_k$ in (2): $Z_{v,k}$ does not depend on $u$, so it can be moved to the subscript of the sum.
- L112: it is written that all homophily measures are feature-independent, but $H_{GE}$ depends on features.
- Definition 1: notation $P(CL_{Bayes}(x))$ is not clear.
Some typos:
- L48: “Literatures”
- L100: inconsistency of singular and plural
- L111: “imply” → “implying”
- L120 and below: quotes are typed incorrectly, it should be ``like this'' in latex
- L149: “the the”
- L170: “the fixed the classifier”
[1] Lim D. et al. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. NeurIPS 2021.
[2] Platonov O. et al. A critical look at the evaluation of GNNs under heterophily: Are we really making progress? ICLR 2023.
[3] Platonov O. et al. Characterizing graph datasets for node classification: Beyond homophily-heterophily dichotomy. ArXiv:2209.06177, 2022.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The pipeline in Section 4.1 seems overcomplicated. There are two statistical tests, and one is used to define Prop(GCN). It seems that here one can compute the proportion of nodes whose average intra-class node distance is smaller than the inter-class node distance. Would this work or statistical test on this step is necessary?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1:
The paper is limited to heterophilous datasets known to have certain drawbacks [1,2].
### R1:
Although our paper studies the effect of heterophily, our analysis and experiments are not limited to heterophilous datasets. In fact, the analysis in Section 3 comprehensively investigates the impact of graph structures under different homophily levels (from high to low, or $h$ from 1 to 0). For the experimental part in Section 4, the real-world benchmark datasets include both homophilic ($\textit{Cora, CiteSeer, PubMed}$) and heterophilic ($\textit{Cornell, Wisconsin, Texas, Film, Chameleon, Squirrel}$) graphs. Additionally, in Appendix G.4 we test the performance metrics on large-scale benchmark datasets, which also cover both homophilic and heterophilic datasets.
### Q2:
The proposed measure (Section 4.2) is quite straightforward: it suggests predicting the relative performance of a GNN by training its simplified variant.
### R2:
We would like to clarify and restate the contributions of the proposed metric: 1. As mentioned in Section 4.2, instead of "training its simplified variant", we emphasized that the qualified classifier should not require "training"; 2. The main contribution of the proposed metric is not only using an 'untrained' classifier, but also leveraging the proposed principle "intra-class embedding distance is smaller than the inter-class embedding distance" to construct it. The new metrics based on the above principle can provide accurate statistical threshold value to predict the superiority of G-aware models and is verified to be effective. In addition, we are the first to introduce hypothesis testing to construct performance metric and also the first to show that it is a feasible way besides homophily metrics to derive better metrics. We believe this new direction can help the community to develop metrics with better properties in the future.
### Q3:
On the definition of CSBM-H. According to the definition, $hd_0$ and $hd_1$ have to be integer. Also, possible values of $d_0$ and $d_1$ depend on the class sizes (the number of outgoing edges should be equal for two classes). This would not be necessary if the graphs are assumed to be directed, but this is not the case according to Section 2.
### R3:
In practice, $hd_0$ and $hd_1$ need to be integer values.
But we relax them to continuous values in the figures in Section 3.4 because we want to make the curves more readable and intuitive, especially to show the intersections of the 3 zones. We will add comments clarifying this point for CSBM-H to avoid unnecessary confusion in the revised version.
Thanks for bringing up the discussion of the "possible values of $d_0$ and $d_1$"; here are our thoughts: if we impose an undirected assumption in CSBM-H, we have to discuss not only the node degree from intra-class edges but also the degree from inter-class edges (as you said) and control their relations with the corresponding homophily level. This will inevitably add more parameters to CSBM-H and make the model much more complicated. However, we find that this complication does not bring extra benefit for understanding the effect of homophily, which deviates from the main goal of our paper. We suspect this might be one of the reasons that existing work mainly keeps the discussion within the directed setting [1].
Actually, when CSBM-H was first designed, we wanted it to have only one "free parameter" $h$ to keep it simple, because in this way we are able to show the whole picture of the effect of homophily from 0 to 1, as in the figures in Section 3.4.
All in all, we find your suggestion interesting. We will add a discussion in the revised version and encourage the GNN community to consider more CSBM-H variants with more complicated assumptions, e.g., different class homophily, different node-local homophily distributions, different node degree and class variance distributions, and the undirected assumption you proposed.
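To make the directed two-class CSBM-H setup discussed above concrete, here is a minimal toy sketch (our own illustration, not the paper's code); `sample_csbm_h` and all parameter names are hypothetical, and $h \cdot d$ is assumed to be an integer:

```python
import numpy as np

def sample_csbm_h(n_per_class, d, h, mu0, mu1, sigma=1.0, seed=0):
    """Toy sketch of a two-class, directed CSBM-H-style generator:
    each node draws h*d intra-class and (1-h)*d inter-class out-neighbors;
    node features come from class-conditional Gaussians."""
    rng = np.random.default_rng(seed)
    n = 2 * n_per_class
    y = np.array([0] * n_per_class + [1] * n_per_class)
    # class-conditional Gaussian features
    X = np.where(y[:, None] == 0,
                 rng.normal(mu0, sigma, (n, len(mu0))),
                 rng.normal(mu1, sigma, (n, len(mu1))))
    d_intra, d_inter = int(h * d), d - int(h * d)
    adj = np.zeros((n, n), dtype=int)
    for v in range(n):
        same = np.flatnonzero((y == y[v]) & (np.arange(n) != v))
        diff = np.flatnonzero(y != y[v])
        adj[v, rng.choice(same, d_intra, replace=False)] = 1
        adj[v, rng.choice(diff, d_inter, replace=False)] = 1
    return X, y, adj

X, y, adj = sample_csbm_h(50, 10, h=0.8, mu0=[0.0, 0.0], mu1=[2.0, 2.0])
print(adj.sum(axis=1)[:3])  # every node has out-degree d = 10
```

Sweeping `h` from 0 to 1 in such a generator is what produces the homophily curves of the kind shown in Section 3.4; a single free parameter `h` keeps the whole picture readable.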
### Q4:
The explanation in lines 168-172 was unclear to me; a more detailed explanation would help.
### R4:
The decision boundary in [1] is defined as $P = \{ x \mid w^T x - w^T(\mu_0+\mu_1)/2 = 0 \}$, where $w = (\mu_0-\mu_1)/||\mu_0-\mu_1||$ is a fixed parameter. This classifier depends only on $\mu_0, \mu_1$ and is fixed across different homophily levels $h$. However, as $h$ changes, the two normal distributions change and the "separability" of the two normals changes as well. Thus, the fixed classifier used in [1] is not qualified to quantify the node distinguishability of CSBM-H, or of any two-normal model with homophily.
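To make this concrete, here is a small numpy-only simulation (ours, not from the paper; the means and variances are illustrative) showing that the fixed midpoint classifier of [1] becomes suboptimal for two 1-D normals once their variances differ, while the Bayes rule adapts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D normals standing in for the class distributions; the means
# and (unequal) variances here are illustrative, not taken from the paper.
mu0, mu1 = 0.0, 2.0
s0, s1 = 0.5, 2.0
n = 200_000
x0 = rng.normal(mu0, s0, n)
x1 = rng.normal(mu1, s1, n)

# Fixed classifier of [1]: threshold at the midpoint of the two means,
# independent of how the distributions change with h.
mid = (mu0 + mu1) / 2
err_fixed = 0.5 * ((x0 > mid).mean() + (x1 < mid).mean())

# Bayes rule: assign the class with the larger density at x.
def bayes(x):
    p0 = np.exp(-(x - mu0) ** 2 / (2 * s0**2)) / s0
    p1 = np.exp(-(x - mu1) ** 2 / (2 * s1**2)) / s1
    return (p1 > p0).astype(int)

err_bayes = 0.5 * ((bayes(x0) == 1).mean() + (bayes(x1) == 0).mean())
# The midpoint rule ignores the variances, so its error exceeds Bayes'.
```

With these parameters the midpoint classifier's error is roughly 0.17 while the Bayes error is roughly 0.14, so a fixed classifier understates the achievable separability.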
Hope this explanation is helpful.
### Q5:
The analysis assumes a particular aggregation (random walk), and it is not clear whether conclusions may change for other aggregations.
### R5:
In Appendix G.6 (in the supplementary material), we report the results for classifier-based performance metrics with the symmetric renormalized affinity matrix and compare them with the existing metrics. We observed similar performance advantages as with the random-walk renormalized matrix.
The analysis in Section 3.2 and Section 3.3 stays the same. For the ablation study in Section 3.4, we redrew the figures for the symmetric renormalized affinity matrix. We observed similar results for the 3 curves and 3 zones, so the conclusions remain the same. We will add the results and discussion for the symmetric renormalized affinity matrix to the revised version.
Besides, as mentioned in lines 94-95, random walk aggregation is commonly used in the GNN community to study heterophily [1], and we stay consistent with the current literature in our paper.
### Q6:
... Label Informativeness ...
### R6:
Thanks for pointing it out. Please see the Author Rebuttal box for the results and comparisons of Label Informativeness and adjusted homophily.
[1] Is Homophily a Necessity for Graph Neural Networks?. In International Conference on Learning Representations.
---
Rebuttal Comment 1.1:
Title: Response to Reviewer 7wh2 Part (2/2)
Comment: ### Q7:
The pipeline in Section 4.1 seems overcomplicated. There are two statistical tests, and one is used to define Prop(GCN). It seems that here one can compute the proportion of nodes whose average intra-class node distance is smaller than the inter-class node distance. Would this work or statistical test on this step is necessary?
### R7:
Thanks for carefully going through this detail. The statistical test in computing Prop(GCN) is necessary to avoid noisy nodes. In practice, for many nodes we observed that the ratio (intra-class node distance):(inter-class node distance) is approximately 1:1, especially when labels are sparse and we use a sampling method. This not only causes instability in the outputs, but also sometimes produces false results. Thus, we do not want to take these "marginal nodes" into account in the comparison of Prop values, and we found that using another hypothesis test helps a lot. This is also consistent with our goal to test the "proportion of nodes whose intra-class node distance is $\textbf{significantly smaller}$ than inter-class node distance", as mentioned in lines 299-300.
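A minimal numpy-only sketch of this idea (the function name, toy data, and the one-sided z-test used as a stand-in for the paper's exact hypothesis test are all ours): a node is counted only when its sampled intra-class distances are *significantly* smaller than its inter-class distances, which filters out the "marginal" ~1:1 nodes that a plain mean comparison would count:

```python
import numpy as np

rng = np.random.default_rng(1)

def prop_significant(emb, labels, n_samp=50, z_crit=1.645):
    """Fraction of nodes whose sampled intra-class distances are
    significantly smaller (one-sided z-test at ~5%) than their
    sampled inter-class distances."""
    n = len(emb)
    hits = 0
    for v in range(n):
        same = np.flatnonzero((labels == labels[v]) & (np.arange(n) != v))
        other = np.flatnonzero(labels != labels[v])
        d_in = np.linalg.norm(emb[rng.choice(same, n_samp)] - emb[v], axis=1)
        d_out = np.linalg.norm(emb[rng.choice(other, n_samp)] - emb[v], axis=1)
        z = (d_in.mean() - d_out.mean()) / np.sqrt(
            d_in.var(ddof=1) / n_samp + d_out.var(ddof=1) / n_samp)
        hits += z < -z_crit  # only clearly-smaller intra-class distance counts
    return hits / n

labels = np.repeat([0, 1], 100)
# Well-separated classes: almost every node passes the test.
emb_sep = np.vstack([rng.normal(0, 1, (100, 8)), rng.normal(6, 1, (100, 8))])
# Identical classes: only ~5% of nodes pass, by chance.
emb_mix = rng.normal(0, 1, (200, 8))
prop_sep = prop_significant(emb_sep, labels)
prop_mix = prop_significant(emb_mix, labels)
```

On the mixed data the test's false-positive rate stays near the significance level, which is exactly the filtering effect described above.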
### Q8:
Definition of $h_k$ in (2): $Z_{v,k}$ does not depend on $u$, so it can be moved to the subscript of the sum.
### R8:
Thanks for your suggestion. We will modify it in the revised version.
### Q9:
L112: it is written that all homophily measures are feature-independent, but $H_\text{GE}$ depends on features.
### R9:
Thanks for your comment. We will modify it to "...almost all homophily metrics..." in the revised version.
### Q10:
Definition 1: notation $P(\textup{CL}_{\textup{Bayes}}(\textbf{x}))$ is not clear.
### R10:
Thanks for your suggestion. We will modify it to $P(\textup{CL}_{\textup{Bayes}}(\textbf{x}) | \textbf{x})$ in the revised version. Hope this notation can clear up your confusion.
### Q11:
The paper is hard to follow in several places.
### R11:
Could you please point out several places so that we can address your confusion directly? | Summary: This paper studies when Graph Neural Networks (GNNs) can help with node classification tasks. The authors first focus on a variant of the Contextual Stochastic Block Model (CSBM) and propose new metrics for node distinguishability based on this model. The authors carry out comprehensive experiments and empirically characterize the regimes when original, low-pass filtered and high-pass filtered features are more helpful. In addition, the authors propose two additional hypothesis testing based metrics for deciding if GNNs are helpful, and empirically verify the effectiveness of the new metrics over real data. The experiments show that the new metrics are more indicative of whether GNNs can lead to better classification accuracy over baseline graph-agnostic models (i.e. MLPs).
Strengths: - The effect of graph homophily/heterophily on the performance of GNNs is an important topic which has received a lot of interest. This paper provides a more comprehensive perspective on the subject.
- The experiments are carefully designed and reasonably comprehensive. The empirical results provide good support for the main claims of the paper.
- Overall, the paper is well organized.
Weaknesses: - I think the overall writing can be improved. I did not feel excited when reading the paper up to and including page 5. This is just my feeling and it does not mean the paper is not good. Moreover, I found myself sometimes losing focus when reading the first 5 pages. Maybe adding 1-2 sentences describing the emphasis of each section at its beginning would help.
- It seems to me that Section 3 and Section 4 study completely different things under different settings, although they are related. I think the results of Section 3 and Section 4 can be considered two separate contributions of the paper. It might be a good idea to make this clear from the beginning. Currently, the gap between the two sections is not clear from either the abstract or the introduction.
- There are a few typos in the paper. For example:
- Line 18: it significantly -> it is significantly
- Line 112, the authors claim that "the current homophily metrics are all ... feature independent", however, the generalized edge homophily is feature dependent.
- Line 148: inner-class ND -> intra-class ND. Better be consistent since you used intra-class everywhere else.
- Most figures involving PBE, e.g. Figure 2a, 3a, have a different naming in the legend than others. Better to have a consistent naming in figure legend.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - For Figure 2, since d_0 = d_1 = 5, how did you obtain the plot at h=0.5? It's fine to apply some smooth interpolation but the middle point is important since it is the peak for LP. Maybe d_0 = d_1 = 10 is a better choice for the base setting.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I could not find where the authors addressed the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1:
I think the overall writing can be improved. I did not feel excited when reading the paper up to and including page 5. This is just my feeling and it does not mean the paper is not good. Moreover, I found myself sometimes losing focus when reading the first 5 pages. Maybe adding 1-2 sentences describing the emphasis of each section at its beginning would help.
### R1:
Thanks for your helpful suggestions. We will modify our paper to make it more reader-friendly.
### Q2:
It seems to me that Section 3 and Section 4 study completely different things under different settings, although they are related. I think the results of Section 3 and Section 4 can be considered two separate contributions of the paper. It might be a good idea to make this clear from the beginning. Currently, the gap between the two sections is not clear from either the abstract or the introduction.
### R2:
Thanks for your suggestion. This is very important and actionable feedback. We will elaborate on the relation between the contributions of Section 3 and Section 4 in the abstract and introduction.
Both Section 3 and Section 4 are motivated by the same principle that node distinguishability is related to "intra-class 'distance' vs. inter-class 'distance'". Section 3 directly studies this principle with CSBM-H, which is a toy example. Section 4 verifies whether this principle really relates to the performance of GNNs and derives new performance metrics based on it.
### Q3:
There are a few typos in the paper.
### R3:
Thanks for carefully going through our paper and pointing out those typos. We have corrected them in the revised version.
### Q4:
Line 112, the authors claim that "the current homophily metrics are all ... feature independent", however, the generalized edge homophily is feature dependent.
### R4:
Thanks for your suggestion. We will modify it to "...almost all homophily metrics..." in the revised version.
### Q5:
For Figure 2, since d_0 = d_1 = 5, how did you obtain the plot at h=0.5? It's fine to apply some smooth interpolation but the middle point is important since it is the peak for LP. Maybe d_0 = d_1 = 10 is a better choice for the base setting.
### R5:
Thanks for your suggestion. We relax $hd_0$ and $hd_1$ to continuous values so that the curves in the figures in Section 3.4 are more readable and intuitive, especially for showing the intersections of the 3 zones. We will add comments to clarify this point for CSBM-H to avoid unnecessary confusion. We will also try the $d_0 = d_1 = 10$ setting you suggest in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. My questions have been addressed. Overall I think this is a good paper.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We appreciate your recognition and strong support of our paper. Your constructive suggestions have made our paper better.
Strengths: I am not very familiar with the literature needed to assess the originality of this article.
It is well written.
Weaknesses: For me it seems the article mainly studies the separability of a binary Gaussian mixture. The principle it derives (l. 347, whether intra-class node embedding "distance" is smaller than inter-class node embedding "distance") just means the two Gaussians do not overlap.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: A few remarks:
L. 75 are the features in R or R^F? and l. 154, in F_h?
L. 157 what is FP? In eq. 3 the authors could remind what are the FP and LP filters among the many quantities.
Theorem 1: I would not call it the optimal Bayes classifier since it is restricted to the features only; it does not take into account the graph of the CSBM. Also, this is just the optimal classifier for a binary Gaussian mixture; maybe it is not worth a theorem.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors did not discuss possible limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1:
For me it seems the article mainly studies the separability of a binary Gaussian mixture. The principle it derives (l. 347, whether intra-class node embedding "distance" is smaller than inter-class node embedding "distance") just means the two Gaussians do not overlap.
### R1:
Thanks for carefully going through our paper, but we think you might have oversimplified our results, and we would like to clarify the contributions of our paper. The main goal of Section 3 is not to "study the separability of a binary Gaussian mixture". Instead, we are interested in how the separability (node distinguishability) changes as the homophily level $h$ goes from 0 to 1, and in how to use a curve to show the whole picture. We are also interested in how the curves for different graph filters intersect with each other. Researchers are intrigued by these topics, but the problems remain underexplored and not well understood in the current GNN community.
Also, the simplified two-normal setting is widely used to study various tasks on graphs, including classification on heterophilic graphs [2], as mentioned at the beginning in section 3.2.
In addition, rather than being derived from the two-normal setting, the principle is motivated by the example in Figure 1, as stated in Section 3.1. We use CSBM-H to formulate the principle, as stated in line 134.
Furthermore, the two-normal setting is a toy example for studying the principle intuitively, and the principle does not just mean that "two Gaussians do not overlap". In fact, its importance lies in providing a new way to develop metrics beyond homophily values, e.g., classifier-based performance metrics, to quantify node distinguishability (ND). Those metrics are verified to be better than homophily values; however, this principle has never been studied for understanding the effect of homophily. Our paper fills that gap, and we believe it can provide new tools for researchers exploring heterophily in the future. Thus, we hope you can re-evaluate the contribution and novelty of our paper.
### Q2:
L. 75 are the features in R or R^F? and l. 154, in F_h?
### R2:
We use $R^F$ because Section 2 only gives a general introduction to graphs. From your feedback, however, we realized that this might cause unnecessary confusion, so we have decided to use $R^{F_h}$ in the revised version.
### Q3:
L. 157 what is FP? In eq. 3 the authors could remind what are the FP and LP filters among the many quantities.
### R3:
FP stands for full-pass. Thanks for the suggestion; we will add explanations of the FP, LP and HP filters before Equation 3 in the revised version.
### Q4:
Theorem 1: I would not call it the optimal Bayes classifier since it is restricted to the features only; it does not take in account the graph of the CSBM. Also, this is just the optimal classifier for a binary Gaussian mixture; maybe it is not worth a theorem.
### R4:
Thanks for the suggestion. As mentioned in lines 163-165, the theorem is stated for $x$ (feature-only), but the results are applicable to $h$ and $h^{HP}$ (i.e., the graph information of CSBM-H is included in $h$ and $h^{HP}$) when the parameters are replaced according to Equation 3. Thus, the optimality is kept for graphs with different homophily levels and with different filters. We will emphasize this sentence in case other readers miss it. Regarding the naming, please refer to [1].
The result of Theorem 1 is important for quantifying the node distinguishability of CSBM-H and is the first proposed for studying homophily. When people study more complicated variants of CSBM-H in the future, Theorem 1 will also be a crucial tool. Thus, we think it is worth a theorem.
[1] https://en.wikipedia.org/wiki/Bayes_classifier
[2] Is Homophily a Necessity for Graph Neural Networks?. In International Conference on Learning Representations.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed answers. I am still skeptical about the contribution this article brings. The presentation is easy to follow and the setting allows to derive clear and understandable conclusions; but they seem limited and too simple. Figs. 2 to 5 depict the separability of two Gaussians that are more or less mixed by different filters that depend on $h$.
> its importance lays in that it provides a new way to develop metrics beyond homophily values, e.g. classifier-based performance metrics, to quantify node distinguishability (ND).
If I am right, homophily was introduced to explain why GNNs perform badly on some datasets. I may simplify too much (again, I am not very familiar with this topic): it seems that this new classifier-based metric trivially means that GNNs have difficulties where (simple) GNNs are bad.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer GCuM
Comment: Thanks for your reply and here is our response to your concerns of novelty.
### Q1.
I thank the authors for their detailed answers. I am still skeptical about the contribution this article brings. The presentation is easy to follow and the setting allows to derive clear and understandable conclusions; but they seem limited and too simple. Figs. 2 to 5 depict the separability of two Gaussians that are more or less mixed by different filters that depend on $h$.
### R1:
As we mentioned in our reply, the two-Gaussian setting is a commonly used tool and toy example for studying the complex phenomenon of homophily and heterophily, and it is widely used in the heterophily community. We never try to oversimplify the setting to get easier results or to take any advantage of it compared with other published literature. We simply try to be consistent with the tools developed in the heterophily community.
In addition, just as for any other machine learning method whose direct computation is intractable or infeasible, simplification is sometimes necessary and should not diminish the result's importance. The merit of the two-normal setting is that its node distinguishability (ND) can be explicitly quantified. Through the ND, we discovered the 3 zones, how the 3 zones and 3 curves change as the homophily level changes, and how class variances and node degree impact the 3 zones. These discoveries are all important, innovative, and of high interest to the heterophily community, and their importance is not impaired by the setting's simplicity.
Besides, in our paper we do not limit our analysis to the two-normal setting. In Section 3.5, we extend the analysis to more general settings and find similar conclusions. Unfortunately, the ND or separability of this general setting cannot be explicitly quantified (if this is what you expect).
Discussing variants of CSBM-H with more complicated assumptions might be doable, e.g., with different class homophily, different node-local homophily distributions, different node degree and class variance distributions, and undirected message passing. We will add a discussion in the revised version and encourage the GNN community to think about these more complicated settings.
Based on the above points, we don't think the contribution of our paper can be denied because of simplicity.
### Q2.
If I am wright, homophily was introduced to explain why GNNs perform badly on some datasets. I may simplify too much (again, I am not very familiar with this topic): it seems that this new classifier-based metric trivially means that GNNs have difficulties where (simple) GNNs are bad.
### R2:
Homophily was not introduced to explain "why GNNs perform badly"; it is about why GNNs are worse (compared to NNs). To clarify, let us first briefly introduce the history of this line of research.
As we stated in our paper, graph-aware models (GNNs) differ from graph-agnostic models (NNs) in that they have an additional feature aggregation step in each layer. This extra aggregation step sometimes brings benefit, which gives us better performance, but sometimes brings harm, which gives us worse performance. People want to use GNNs (with feature aggregation) on the "good" graphs and avoid GNNs on the "bad" graphs, and we need an easy method to categorize them.
In 2020, people started looking for the cause of the harm. At that time, they believed heterophily was the answer and tried to use (edge or node) homophily values to categorize the "good" and "bad" graphs [1]. In 2021, people questioned this conclusion and found that homophily is not a necessity for better performance and heterophily is not always worse [2]. In 2022, people tried to find better homophily metrics to differentiate "good" and "bad" graphs [3], but that metric is linear, feature-independent, and cannot provide an accurate threshold value with statistical meaning for the categorization. The classifier-based metric proposed in our paper addresses these problems and is verified to give more accurate threshold values with statistical significance. Finding when GNNs have an advantage or disadvantage against NNs is far from trivial.
We hope this explanation clarifies the importance of the proposed metric and helps you better understand this line of research. We will add it to the revised version if you find it necessary.
[1] Beyond homophily in graph neural networks: Current limitations and effective designs. Advances in neural information processing systems, 33, 7793-7804.
[2] Is Homophily a Necessity for Graph Neural Networks? International Conference on Learning Representations. 2021.
[3] Revisiting heterophily for graph neural networks. Advances in neural information processing systems, 35, 1362-1375. | Summary: Recent research indicates that Graph Neural Networks (GNNs) maintain their advantage even in the absence of homophily, as long as nodes from the same class exhibit similar neighborhood patterns. This argument, however, primarily considers intra-class Node Distinguishability (ND) while overlooking inter-class ND, thus providing an incomplete understanding of homophily. Therefore, the authors in this paper propose that an ideal ND scenario entails smaller intra-class ND relative to inter-class ND. To substantiate this, the authors introduce the Contextual Stochastic Block Model for Homophily (CSBM-H) and define two ND metrics: Probabilistic Bayes Error (PBE) and negative generalized Jeffreys divergence. Experimental results reveal that the supremacy of GNNs is indeed closely tied to both intra- and inter-class ND, irrespective of homophily levels.
Strengths:
1. The paper addresses a highly significant issue.
2. The proposed CSBM-H and the defined metrics illustrate that the superiority of GNNs is indeed closely linked with both intra- and inter-class ND, regardless of homophily levels. The results are interesting.
3. The paper is solidly grounded in its field.
Weaknesses:
N/A
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors:
N/A
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: While I do not consider myself an expert in this area, I appreciate the content of this paper. It addresses a highly significant issue, and proposes that an ideal ND scenario would have smaller intra-class ND compared to inter-class ND. To formulate this idea, the authors introduce the Contextual Stochastic Block Model for Homophily (CSBM-H) and define two ND metrics: Probabilistic Bayes Error (PBE) and negative generalized Jeffreys divergence. The proof is robust and the paper appears to be strongly grounded in the subject matter.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks so much for your nice review and strong recognition to our contributions. | Rebuttal 1:
Rebuttal: In this part, we provide experimental results of Label Informativeness (LI) and adjusted homophily ($H_\text{adj}$) on small- and large-scale datasets and compare them with the proposed metrics. Before discussing the results, we would like to introduce 2 threshold values for classifier-based performance metrics (you can find them in Appendix G.2).
Typically, the threshold separating homophilic and heterophilic graphs is set at 0.5. For classifier-based performance metrics, we establish two benchmark thresholds as follows:
• Normal Threshold 0.5 (NT0.5): Although it does not indicate statistical significance, we are still comfortable setting 0.5 as a loose threshold. A value exceeding 0.5 suggests that the G-aware model is not very likely to underperform its coupled G-agnostic model on the tested graph, and vice versa. (Error cases with regard to NT0.5 are marked in grey.)
• Statistically Significant Threshold 0.05 (SST0.05): Instead of offering an ambiguous statistical interpretation, SST0.05 provides a clear statistical meaning. A value smaller than 0.05 implies that the G-aware model significantly underperforms its coupled G-agnostic model, and a value greater than 0.95 suggests a high likelihood of the G-aware model outperforming its coupled G-agnostic model. A value between 0.05 and 0.95 indicates no significant performance distinction between the G-aware model and its G-agnostic model. SST0.05 is a stricter threshold than NT0.5, so we will observe more error cases under SST0.05 than under NT0.5. (Error cases with regard to SST0.05 are marked in red.)
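As a small illustrative helper (the function name and decision strings are ours, purely to restate the two thresholds in code):

```python
def interpret_cpm(p):
    """Interpret a classifier-based performance metric value p in [0, 1]
    under the NT0.5 and SST0.05 thresholds described above
    (labels are illustrative, not the paper's wording)."""
    if p < 0.05:
        return "SST0.05: G-aware model significantly underperforms"
    if p > 0.95:
        return "SST0.05: G-aware model likely outperforms"
    # Between 0.05 and 0.95: no significant distinction; fall back to NT0.5.
    lean = "G-aware" if p > 0.5 else "G-agnostic"
    return "no significant distinction; NT0.5 leans " + lean
```

Note that SST0.05 is strictly harder to satisfy than NT0.5: every value flagged by SST0.05 also falls on the corresponding side of 0.5.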
We can see that, on both small- and large-scale datasets, the classifier-based performance metrics (CPMs) are significantly better than the existing metrics at revealing the advantages and disadvantages of GNNs, decreasing the overall error rate from at least 0.34 to 0.13 (at most 0.19 under SST0.05).
Pdf: /pdf/873be37df272d61e4643724f3edb0d986b834474.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper delves deeper into the topic of GNNs' superiority in node classification tasks. The authors conduct experiments to show that GNNs do better due to inter- and intra-class node distinguishability regardless of homophily levels, in contrast to what current research suggests. They propose a new metric that sheds more light on how GNNs work.
Strengths: GNNs are a hot topic, and studying how they work and which settings allow GNNs to be more effective is important. This paper does that and proposes a new metric to that end.
Weaknesses: 1. Since the crux of the paper is about node distinguishability (ND), it would be nice to see inter and intra ND visualizations of the datasets used similar to Figure 1 (toy example).
2. I'm not sure the claim "superiority of GNNs is indeed closely related to both intra- and inter-class ND regardless of homophily levels" is sufficiently backed by the evidence in Table 1. For example, setting a different threshold than 0.5 for homophily metrics could make them look better.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Anything special about the PubMed dataset causing incorrect guesses? Also, the Chameleon and Squirrel datasets have more incorrect guesses than other "less homophilic" datasets. Any way to visualize ND to pinpoint where the difference is coming from?
2. Any comment on the time complexity trade-offs between homophily metrics and classifier based ones?
Minor:
line 18: "it is significantly" instead of "it significantly"
line 321: "split" instead of "splits"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1:
Since the crux of the paper is about node distinguishability (ND), it would be nice to see inter and intra ND visualizations of the datasets used similar to Figure 1 (toy example).
### R1:
Thanks for your suggestion. Actually, we do include an option to visualize CSBM-H at different homophily levels in our code in the supplementary material. We will add some visualizations in the revised version.
### Q2:
I'm not sure the claim "superiority of GNNs is indeed closely related to both intra- and inter-class ND regardless of homophily levels" is sufficiently backed by the evidence in Table 1. For example, setting a different threshold than 0.5 for homophily metrics could make them look better.
### R2:
This is a good question. Regarding the threshold problem, we would like to emphasize the following points: 1. 0.5 is a commonly used threshold in the homophily/heterophily community to separate homophilic and heterophilic graphs. It is not a randomly picked value, and it has a mathematical meaning: homophily, i.e., a metric greater than 0.5, means that the proportion of edges connecting nodes from the same class is, in some sense, larger than that connecting different classes, and vice versa. Any other threshold should have a statistical or mathematical meaning, and we cannot arbitrarily choose one. This property is also one of the advantages of our proposed metrics over the existing metrics. 2. We cannot manually pick different threshold values for different homophily metrics to make them "look better" on a certain set of graphs. This would cause an "overfitting" problem: if you use a cherry-picked threshold on other unseen datasets, it might perform badly.
Thus, our claim is properly backed by the results in Table 1. Besides, we provide additional experimental evidence in Appendix G to support our claim.
### Q3:
Anything special about the PubMed dataset causing incorrect guesses? Also, Chameleon and Squirrel datasets have more incorrect guesses than other "less homophilic" datasets. Any way to visualize ND to pin point where the difference is coming from?
### R3:
Thanks for going through Table 1 carefully; this is a very good question. From the experimental results on large-scale datasets reported in Table 4 in Appendix G.4, we observe that, for linear and non-linear G-aware models, there is an inconsistency in how they compare with their coupled G-agnostic models. For example, on $\textit{Penn94, pokec, snap-patents}$ and $\textit{twitch-gamers}$, SGC-1 underperforms MLP-1 but GCN outperforms MLP-2. In fact, $\textit{PubMed}$ also belongs to this family of datasets. We do not have a proven theory to explain this phenomenon for now, but there is evidently a synergy between homophily/heterophily and non-linearity that jointly causes this discrepancy. We believe that, for this special subset of heterophilic graphs, theoretical analysis should be developed to discuss the interplay between graph structure and feature non-linearity, and how they affect node distinguishability together.
The current homophily values (including the proposed metrics) are not able to explain the phenomenon associated with this group of datasets.
We keep it as an open question and encourage people from the GNN community to study it in the future. We will add the above discussion to the revised version of this paper.
The visualization of ND for Chameleon and Squirrel is a good point. We will try to add this to the revised version.
### Q4:
Any comment on the time complexity trade-offs between homophily metrics and classifier based ones?
### R4:
In Table 9 in Appendix G.5 (in the supplementary material), we provide the running time of CPMs on small- and large-scale datasets. The running time of CPM is short: it takes several minutes on one NVIDIA V100 GPU even on large-scale datasets such as pokec and snap-patents, which contain millions of nodes and tens of millions of edges. In comparison, training GCN on these datasets takes several hours, and training SOTA models, e.g., ACM-GCN, on them can even take days.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for the thorough response. I changed the score to 7
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thanks so much for your timely response and positive feedback. | null | null | null | null | null | null |
On the Identifiability of Sparse ICA without Assuming Non-Gaussianity | Accept (poster) | Summary: Aiming to address rotational invariance of Gaussian sources, this work first proposed an ICA identifiability theory based on Structural Variability. It then proposed two methods based on sparsity regularization and continuous constrained optimization to estimate the mixing matrix. It also made connections between ICA and causal discovery. It finally showed preliminary results to validate the proposed theory and estimation methods.
Strengths: 1. The proposed method could be potentially useful as it claims to identify linear Gaussian sources with a weaker assumption compared to structural sparsity.
2. The notations, theorems and proofs are clear in general.
3. The proofs or explanations in the Appendix are very detailed and helpful.
4. The author(s) conducted experiments to validate their theory and showed the effectiveness of the proposed SparseICA method in two simulated datasets.
Weaknesses: 1. As the author(s) pointed out, my main concern is that there is no sufficient empirical evaluation to demonstrate the effectiveness and generalizability of the proposed theory. Current experiments only used simulated data in two settings. How does the proposed theory work on real-world datasets? This work would be more convincing if there were experiments on real-world datasets.
2. Additionally, the author(s) argued that the theory proposed by Zheng et al. 2022 couldn't capture hierarchical structures (lines 39 - 41), but it doesn't seem that the authors performed experiments to identify Gaussian sources which are organized in a hierarchical manner.
3. I don't feel all theorems/assumptions/examples were explained very clearly, but I also note that the author(s) provided detailed proofs or explanations for each proposed theorem/assumption/example in the Appendix. So I would suggest that the author(s) try to organize the manuscript more logically and guide readers to the corresponding Appendix section in the main text.
4. There are a lot of typos in the present manuscript. For example, line 71: $j$-th column by $a_{:, j}$; line 104: Section 3.2; line 248: there exists; line 254: repeated "with". Please proof-read the manuscript prior to submission.
5. There is no statistical test to compare results.
6. There is no code provided to replicate the results. Please consider making the code publicly available.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Example 1: I understand the derivation in this example but I wonder how to identify such constraints. Do we have to derive these hard or non-hard constraints for each matrix? Are there any general principles to derive such constraints? If so, please briefly explain the principles. If not, it doesn't seem to be computationally efficient to derive these constraints as the dimension of A increases.
2. Line 130: Please elaborate what "sufficiently diverse effect" means exactly.
3. Do you have any explanations about the performance difference from two estimation methods in Figure 1? For example, decomposition method seems worse than likelihood method in vanilla case.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the time dedicated to reviewing our paper and the constructive suggestions. Our responses to these comments are given below.
**Q1: "no sufficient empirical evaluation" and "how does the proposed theory work on real-world datasets?"**
A1: See our response to Q2 in the general response.
**Q2: "it doesn't seem that the authors performed experiments to identify Gaussian sources which are organized in a hierarchical manner".**
A2: Thank you for pointing this out. Given your comment, we will remove that part in Lines 39-41 to avoid possible misunderstanding. This is because we are focusing on the ICA task that is not directly related to hierarchical structures (though indirect connection may exist).
As indicated by Theorems 2 and 3, the structural conditions by Zheng et al. (2022) are somewhat restrictive because they cannot handle cases where the set of observed variables influenced by one source is a superset of those affected by another source. Our proposed approach, on the other hand, is considerably less restrictive and introduces more flexibility, thereby enhancing the applicability with Gaussian sources. We will further clarify these points in the revised version to improve the clarity of our work.
**Q3: "organize the manuscript more logically and guide readers to the corresponding Appendix section in the main text".**
A3: Thanks for the helpful suggestion which helps improve the presentation of the paper. In the revision, we will include the 'links' in the main text to guide readers to the relevant Appendix section, especially for the proofs. Furthermore, we will organize the manuscript, such as the appendices, more logically to improve the presentation.
**Q4: Typos.**
A4: Thank you for your careful reading. We will carefully proofread the manuscript to fix the typos in the revision.
**Q5: "There is no statistical test to compare results."**
A5: Thanks for pointing this out. In light of your comment, we have additionally applied Wilcoxon signed-rank test (at $5$% significance level), and found that the improvements of the proposed method, e.g., in Figure 1, are statistically significant. We will include these statistical tests in the final version of the paper.
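For readers who want to reproduce such a check, a paired test of this kind can be run in a few lines with SciPy. The per-run scores below are illustrative placeholders, not the paper's actual numbers:

```python
from scipy.stats import wilcoxon

# Illustrative paired accuracy scores over 10 runs (made-up values):
# proposed method vs. a baseline, evaluated on the same splits.
proposed = [0.91, 0.88, 0.93, 0.90, 0.92, 0.89, 0.94, 0.90, 0.91, 0.93]
baseline = [0.85, 0.84, 0.88, 0.86, 0.87, 0.83, 0.89, 0.85, 0.86, 0.88]

# Two-sided Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(proposed, baseline)
if p < 0.05:
    print(f"significant at the 5% level (p = {p:.4f})")
```

The signed-rank test is a natural choice here because it pairs runs and makes no normality assumption on the score differences.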
**Q6: "There is no code provided to replicate the results. Please consider making the code publicly available."**
A6: Thanks for this comment. The link to the code has been provided in Lines 862-863 in Appendix D. In light of your comment, we will move the link to Section 5 in the revised paper.
**Q7: "Do we have to derive these hard or non-hard constraints for each matrix? Are there any general principles to derive such constraints?"**
A7: We sincerely appreciate this insightful question. The examples of hard and non-hard constraints (Example 1) serve as illustration purposes for different types of constraints. In practice, during estimation (i.e., Algorithms 1 and 2), we do not have to derive any of these hard and non-hard constraints. The only place involving these constraints is Assumption 3, which requires that the hard constraints (if exist) of the covariance matrix arise from the support of the mixing matrix; even in this case, we do not have to derive these hard constraints. In the revision, we will include a discussion in Section 3 to make this clear.
**Q8: "Please elaborate what 'sufficiently diverse effect' means exactly."**
A8: Thanks for this suggestion. "Sufficiently diverse effect" means that the conditional distribution of sources given the auxiliary variable must vary sufficiently with the auxiliary variable and thus is more complex. This is often expressed in terms of the first-order and second-order derivatives of the conditional distribution; see the precise definition in Hyvärinen et al., (2019, Theorem 1). We will provide a detailed explanation in the revision.
**Q9: "Do you have any explanations about the performance difference from two estimation methods in Figure 1?"**
A9: Thanks for the thoughtful question. A possible reason is that the decomposition-based method involves additional constraint (Eq. (8)) as compared to likelihood-based method (Eq. (9)). Therefore, the resulting optimization problem of decomposition-based method might be harder to solve and contain suboptimal local solutions. We will include this explanation in Section 5 of the revision.
**References:**
A. Hyvärinen, H. Sasaki, and R. Turner. Nonlinear ICA using auxiliary variables and generalized contrastive learning. In International Conference on Artificial Intelligence and Statistics, 2019.
---
Rebuttal Comment 1.1:
Title: A Kind Request for Further Feedback
Comment: Thanks again for taking the time to review our work. We have carefully considered your comments and provided responses to them. Since the discussion period will end in a few days, we would like to kindly request for further feedback. Could you please check whether the responses properly addressed your concern? Thank you very much.
---
Rebuttal Comment 1.2:
Title: Looking forward to your kind feedback
Comment: Dear Reviewer 6AWD,
We are writing to kindly let you know that we have been eagerly waiting for your feedback on our rebuttal, despite your busy schedule. Since the discussion period will end on Monday, we hope for the opportunity to respond to your further comments or questions, if there are any. Any feedback would be appreciated.
Thanks once again,
Authors of #8190
---
Reply to Comment 1.2.1:
Title: A Kind Request for Further Feedback
Comment: Dear Reviewer 6AWD,
We apologize for sending multiple reminders. Since the discussion period will end in two hours, we are very eager to get your feedback on our response. We understand that you are very busy. We would highly appreciate it if you could take into account our point-by-point response when updating the rating and having discussion with AC and other reviewers.
Thanks for your time,
Authors of #8190 | Summary: This paper provides theorems under which the mixing matrix of linear ICA with Gaussian sources can be identified. While Gaussian sources are known to be unidentifiable in the classical ica theory, identifiability is possible if certain sparse structure is assumed for the mixing matrix as was initailly shown in Zheng. The assumptions in Zheng were restrictive however. This work expands the work of Zheng to provide more general conditions, namely:
- the assumption of structural variability provided is much less restrictive on the sparsity pattern
- necessity of this assumption is proven (under the problem considered here)
- connection of this approach to causal discovery is shown
- the theorems lead to new estimation methods based on second-order statistics that allow ICA to be applied on Gaussian sources
Strengths: Strengths:
- the paper is in general well written and of good quality
- the assumptions on sparsity are much more reasonable than in previous works and should allow for future work in this area. To this end, Theorems 2 and 3 provide a relevant and useful comparison
- theoretical assumptions are also often nicely illustrated with examples
- a nice approach of transforming the theorems into algorithms in a justified manner (e.g., framing the search space of A; Theorem 6)
- the connection between ICA and causal discovery is well known, and this paper further contributes to that similarity
- addition of a necessary condition is a nice result and helps to understand the limits of this approach
Weaknesses: Weaknesses:
The biggest conceptual problem I have is that the identifiability here is framed in terms of an optimization problem where the ground-truth covariance matrix is assumed. I find this idea hard to follow since identifiability should be a property of the data model. Typically the process is to assume that we have $\log p(x; \theta) = \log p(x; \hat{\theta})$ and then show that this implies $\theta \sim \hat{\theta}$ -- this approach makes sense as the starting point, equality of likelihoods, can be justified on the basis of MLE and its guarantees (at least in theory). Here instead one seems to start with the assumption that $AA' = \hat{A}\hat{A}'$. But the justification for this starting point is missing -- you should write it out fully. Also, in practice the empirical covariance matrix is used (eq. 8) but what are the guarantees here? I'm quite happy to revise my score upwards once this, and below, are discussed.
I think the practical motivations for this paper are lacking and weak, perhaps partly due to somewhat unclear writing. For example, the authors write that "many biological traits ... are often normally distributed" to justify the importance of doing ICA on Gaussian latents. But those listed biological signals are *not* typically latent; they are often observed (after noise) and can usually be handled by standard ICA, as there is no problem with Gaussian observations. Of course, it is possible to have latent biological signals too, but that should be explained more carefully. It should be stressed, however, that the work still has an important contribution in terms of fundamental research -- practical appeal does not always need to be obvious, so this is not necessarily a big problem. The authors also write that one possible situation in which their theorems are more useful than previous works is when "an observed variable serves as a root cause". This may be possible but goes against the typical idea of ICA, where observations are all assumed to be mixtures of independent latents. I feel this is, however, more easily understood in a causal framework. This leads to my next point...
I find in general that ideologically this work is in some sense difficult to place as it feels like the ideas are more relevant to causal discovery yet it's framed as an ICA paper. But the assumptions on sparsity can still be quite restrictive and a substantially sparse mixing matrix, again, goes against the whole concept of ICA in some sense -- we are not really solving the mixing matrix problem if the mixing that happens is quite limited. Despite the examples given, it is still hard to understand exactly how restrictive sparsity assumptions are *in practice*.
I find the estimation algorithm of the decomposition-based method in general a bit difficult to justify; optimally you would use an $\ell_0$ regularizer, as this would agree with the theory, but that seems difficult to optimize. The likelihood-based method seems much more principled, and I wonder if this paper could have been better framed around likelihood-based methodology in general.
I'm surprised by the poor experimental results -- typically linear ICA achieves 0.99 correlation to the ground-truths eg. with FastICA so I would have expected similar results here. There are also no experiments on real data (not necessarily a big problem due to the breadth of theory here).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: More concerns and things I'd expect revised for me to improve my score:
line 16-17: the examples+reference here are pretty much straight form Hyvärinen (Independent Component Analysis: Algorithms and Applications, section 7). Perhaps with a bit more originality and effort some more varied, newer, examples or at least references could be provided?
"distribution of Gaussian sources, the sparsity of the mixing matrix undergoes noticeable changes." Could you clarify what is meant here?
Please include "links" to the relevant proofs in the text. At the moment one has to wander through the appendix to see whether the proofs exist.
329: "...based method (Eq. (8) or Eq. (9)) on data where both Assumptions 1 and 2 do not hold" The grammar could be polished as now it could mean "where assumption 1 and 2 do not hold at the same time" or that "neither assumptions 1 and 2 hold"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors do indeed point out the limited applications to real world data. Authors also admit that " Since the true generating process of real-world data is inaccessible, it is challenging to quantitatively evaluate the applicability of these sparsity assumptions."
however there is no discussion on why the experimental results are not as good as one would expect, this would be welcome.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's constructive comments, many of which will help improve the clarity of our paper. We have tried to address all the concerns in the following.
**Q1: Justification to start with $AA'=\tilde{A}\tilde{A}'$ is missing.**
A1: We sincerely appreciate this insightful comment, which helps improve the clarity of our theoretical results. We agree that the typical starting point of identifiability proof is "equality of likelihoods". In fact, this is exactly the tool used in our proof of likelihood method. In the large sample limit (as is the case for typical justification of MLE), one can show that $L(\mathbf{A};\bar{\mathbf{\Sigma}})=L(\tilde{\mathbf{A}};\bar{\mathbf{\Sigma}})$ implies $\mathbf{A}\mathbf{A}^\top=\tilde{\mathbf{A}}\tilde{\mathbf{A}}^\top$ (see Eq (9) for exact form of likelihood $L(\cdot)$).
In Theorems 1 and 6, we start with $\mathbf{A}\mathbf{A}^\top=\tilde{\mathbf{A}}\tilde{\mathbf{A}}^\top$ because this is the essential assumption used by the proof. This leaves the door open for different ways to achieve $\mathbf{A}\mathbf{A}^\top=\tilde{\mathbf{A}}\tilde{\mathbf{A}}^\top$, i.e., via decomposition or likelihood method, both of which are correct in the large sample limit. We will discuss this further in the revision.
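To make the large-sample argument explicit, here is the standard Gaussian-likelihood sketch, written in the notation of this rebuttal. The paper's exact objective in its Eq. (9) is not reproduced here, so treat this as a generic illustration rather than the authors' precise formulation:

```latex
% With x = As and s ~ N(0, I), the population log-likelihood depends
% on A only through the model covariance AA^T:
\[
L(\mathbf{A};\bar{\boldsymbol{\Sigma}})
  = -\tfrac{1}{2}\Big[\log\det\big(\mathbf{A}\mathbf{A}^{\top}\big)
    + \operatorname{tr}\!\big((\mathbf{A}\mathbf{A}^{\top})^{-1}\bar{\boldsymbol{\Sigma}}\big)\Big] + c.
\]
% The map \Sigma \mapsto \log\det\Sigma + \operatorname{tr}(\Sigma^{-1}\bar{\Sigma})
% has the unique stationary point (and minimizer) \Sigma = \bar{\Sigma}
% over positive-definite matrices, so, at the optimum, equality
% L(A; \bar{\Sigma}) = L(\tilde{A}; \bar{\Sigma}) forces
% \mathbf{A}\mathbf{A}^{\top} = \tilde{\mathbf{A}}\tilde{\mathbf{A}}^{\top}.
```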
**Q2: Guarantees for empirical covariance matrix.**
A2: See our response to Q1 in the general response.
**Q3: lack of practical motivations and "it is possible to have latent biological signals too but that should be explained more carefully".**
A3: Thanks for the suggestion. In light of it, we will modify the sentence to emphasize that the real-world usage scenarios center on potential latent Gaussian sources. We will also include additional examples beyond biology. For instance, thermal noise in electronic circuits typically adheres to a Gaussian distribution (Ott, 1988), and mixtures of such signals pose challenges to traditional separation techniques.
**Q4: "an observed variable serves as a root cause" and "goes against the typical idea of ICA".**
A4: Thank you for pointing this out. In light of your comment, we will remove that related part to avoid possible misunderstanding.
**Q5: "assumptions on sparsity can still be quite restrictive" and "it is still hard to understand exactly how restrictive sparsity assumptions are".**
A5: Thanks a lot for this thoughtful comment. We completely agree with you that our assumption may be violated in certain situations, and we do not expect our theory to apply to all scenarios. However, given that the problem has clear practical implications and that the sparsity assumptions are expected to hold true for certain mixing matrices, it seems essential to start this line of research and to weaken the sparsity assumptions as much as possible. This extends the applicability of these approaches to cover more general mixing matrices. We also hope that this work will inspire alternative results to make Gaussian sources identifiable.
It is worth noting that sparsity assumptions are particularly relevant when observations are influenced by sources in a "simple" manner, as also discussed in [33]. For instance, ecological, gene-regulatory, and metabolic systems in biology often exhibit sparse interactions (Busiello et al., 2017). Similarly, in physics, complex observed phenomena may often be governed by a relatively small set of fundamental laws, exemplified by Einstein's theory of special relativity (Einstein, 1905). In our revised paper, we will delve into these aspects further, providing a more comprehensive perspective on the practical implications of our theory.
**Q6: Frame the paper around likelihood-based methodology.**
A6: Thanks for this constructive suggestion. We agree that optimally we would use an $\ell_0$ regularizer for decomposition-based method. This partly explains why likelihood method performs slightly better than decomposition method. Following your suggestion, we will restructure Section 4.2 to place more emphasis on the likelihood method.
**Q7: "no experiments on real data".**
A7: See our response to Q2 in the general response.
**Q8: Provide more original, varied, newer examples/references.**
A8: Beside examples in Lines 16-17, we will include biology (Teschendorff et al., 2007; Biton et al., 2014), astronomy (Nuzillard et al., 2000; Akutsu et al., 2020), and earth science (Kaplan, 2003; Moulin et al., 2022) in the revision.
**Q9: Clarify "sparsity of the mixing matrix undergoes noticeable changes".**
A9: This means that, after rotation of the mixing matrix, the support of the mixing matrix may be changed, leading to a denser mixing matrix, although the resulting distribution of the observed variables remains unchanged. Similar intuition is explained in Lines 172-174. We will clarify this further in the revision.
**Q10: Include links to proofs.**
A10: We will include the links in main texts to guide readers to the relevant proofs in the Appendix sections.
**Q11: Polish the grammar of "329: ...based method ... do not hold".**
A11: We will modify the phrase to "where neither Assumption 1 nor Assumption 2 holds".
**Q12: "typically linear ICA achieves 0.99 correlation" and "no discussion on why the experimental results are not as good as one would expect".**
A12: Thanks for bringing up this question and your insightful observation. Since the experiments involve Gaussian sources, FastICA does not perform well because it is based on non-Gaussianity. For our method that can handle Gaussian sources, the optimization may return suboptimal local solutions, so the experimental results are not as good as one would expect (despite still outperforming FastICA). This further demonstrates the difficulties posed by Gaussian sources due to rotational invariance, and indicates that further research could enhance the performance. We will discuss this in the revision.
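A minimal numerical illustration of the rotational invariance mentioned above can help here; the mixing matrix and rotation angle below are arbitrary choices for the sake of a self-contained example:

```python
import numpy as np

# With Gaussian sources s ~ N(0, I), observations x = A s are
# distributed N(0, A A^T). Any rotation R leaves that covariance
# unchanged, since (A R)(A R)^T = A R R^T A^T = A A^T. So
# second-order statistics alone cannot distinguish A from A R,
# and non-Gaussianity-based methods like FastICA have nothing to use.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))

theta = 0.7  # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

cov_A = A @ A.T
cov_AR = (A @ R) @ (A @ R).T
print(np.allclose(cov_A, cov_AR))  # True: A and A R are observationally identical
```

This is exactly the ambiguity that the sparsity assumptions are meant to break.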
---
Rebuttal Comment 1.1:
Title: A Kind Request for Further Feedback
Comment: Thanks again for your time and comments. We have provided responses to your comments. Would you mind checking whether they properly addressed your concerns, or if you have further comments? Your feedback would be appreciated.
---
Rebuttal Comment 1.2:
Comment: I have re-read the paper and the other reviews, comments, rebuttals etc and have reconsidered my opinion.
While my opinion of the paper has improved, I am still not fully convinced that the paper quite merits acceptance. The theoretical work is very good and it does indeed relax conditions compared to previous work, but I don't find it significant enough on its own, as the conditions are still quite strong and restrictive and build quite clearly on previous ideas. I would thus expect stronger empirical results to really show me that "here we have a real-world problem and, as you can see, Zheng's approach fails, as do typical ICA approaches, but our model does much better". You do mention you are running some tests on the Richard et al. experiments, but I find those impossible to judge, as I have not seen them. **In conclusion: optimally, I would recommend the authors resubmit with more convincing empirical, real-world results, as I think it would make the paper a lot stronger, but I will still engage in further discussion with other reviewers, as to me this paper is very much on the threshold.**
p.s. minor thing:
**Q1** I think you missed my point here (and perhaps it was unclear on my part). I understand well where these are coming from, e.g., as you say: *"In the large sample limit (as is the case for typical justification of MLE), one can show that $L(A)=L(\hat{A})$ implies $AA^\top = \hat{A}\hat{A}^\top$."* All I am saying is that you should probably say this more explicitly in the main text. This would make the connection to typical identifiability theorems explicit to readers.
---
Reply to Comment 1.2.1:
Comment: We thank the reviewer for reading our response and for the further comment. We are glad that your opinion of the paper is improved, and you acknowledge that the theoretical work is very good.
Regarding "build quite clearly on previous ideas", while the tool of sparsity is inspired by Zheng et al. (2022), we note that the technical development is entirely different. Our theory/proof leverages the distributional constraints of covariance matrices and effects of support rotations, thereby allowing us to relax the conditions. Such development and techniques were not seen in Zheng et al. (2022) (as well as the typical literature of ICA).
Regarding "the conditions are still quite strong and restrictive", **this is precisely the motivation of our work**. We view it as our duty as researchers to progressively relax these conditions, thereby extending the applicability of the theory to cover more general mixing matrices. Due to the rotational invariance of the Gaussian distribution, ICA with Gaussian sources is a challenging ill-posed problem, and thus some assumptions are inevitably needed. As you acknowledged, we further showed that some of the assumptions are provably necessary, which shed light on the limit of these approaches.
We admit that our work is primarily centered around theory. While we agree that the potential application of our theory in various real-world tasks would be exciting, we note that our theory can only be rigorously validated by ablation studies (via simulated data), which we have done in Section 5. This is because, as you also noticed, it is impossible to ensure that the unknown ground-truth data generating process satisfies some assumptions or not. Therefore, real-world experiments cannot play the role of validating the theory.
Furthermore, it is worth noting that NeurIPS has traditionally published theory-focused papers if they offer relevant results and use nontrivial methods. We believe that both are the case.
Regarding Q1, thanks for the clarification and we will make this more explicit in the main text, following your suggestion. | Summary: This paper considers the problem of estimating a mixing matrix $A^*$, given measuremnts $x = A^* s$. The authors show that under Gaussian $s$, some assumptions on $A$, and using infinite $x$, the sparsest $A$ that satisfies $AA^{T} = A^* A^{* T}$ recovers $A^*$ upto permutation and sign of the columns.
The authors also show connections to causal discovery in the case of structural equation models.
Since the $\ell_0$ sparsity penalty is non-convex, the authors propose a convex relaxation using the $\ell_1$-norm that can be run in practice.
The sparsity assumptions on $A$ induce polynomial constraints $H(A)$ on the elements of the matrix $\Sigma = AA^T$. These constraints are typically hard to compute, but the authors define an auxiliary function $h(A) = \text{tr} ( \sum_{k=2}^{n} off(A) \odot off(A) )$, where $off(A)$ is the off-diagonal version of $A$, and $\odot$ is the Hadamard product. The constraint set now becomes $h(A) = 0$, and the authors show that convex solvers can work in this setting.
Minor:
Using $H$ for the set of hard constraints and $h$ for the trace of the defined matrix is slightly confusing, please change it.
Strengths: - The authors show that their assumptions on the matrix are strictly weaker than existing approaches (Section 3.3)
- The authors show connections to causal discovery, which I thought were interesting.
- Existing approaches cannot handle Gaussian sources $s$, as this induces rotational symmetry on $AA^T$, and hence we can only recover $A$ up to rotation. In contrast, the current work can recover it up to permutation and sign of the matrix.
- The authors are able to show that the convex relaxation in Section 4 recovers $A$ in the asymptotic limit of infinite samples of $x$.
- Experimental results are nice.
Weaknesses: - The identifiability results are for infinite samples of $x$.
- No runtime guarantees
- The authors should spend more time commenting on the technical quality of the results. In some sense, the extension from the non-convex program to the convex relaxation is "expected". What new technical tools were required over traditional sparsity?
- Experiments are a little simplistic, but that's fine for a theory paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I'm not an expert on causal discovery. Are the connections between ICA and identifiability of structural equations established results? Relatedly, can you explain your contributions over existing work.
- The authors should spend more time commenting on the technical quality of the results. In some sense, the extension from the non-convex program to the convex relaxation is "expected". What new technical tools were required over traditional sparsity? Without commenting on this, it would seem that choosing the right assumptions (1-3) is the key for this paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and time devoted to our work. Below we give a point-by-point response to the comments.
**Q1: "Using $H$ for the set of hard constraints and $h$ for the trace of the defined matrix is slightly confusing, please change it".**
A1: Thanks for this suggestion which helps improve the notations. We will change $h(\mathbf{A})$ to other notation such as $g(\mathbf{A})$ in the revision.
**Q2: "The identifiability results are for infinite samples".**
A2: See our response to Q1 in the general response.
**Q3: "No runtime guarantees".**
A3: Thanks for pointing this out. This is an excellent point. Here, we briefly discuss the overall runtime/complexity of the estimation method. For instance, considering the likelihood-based method (i.e., Algorithm 2) with L-BFGS, each inner iteration of the quadratic penalty method (corresponding to the L-BFGS run) has a computational complexity of $O(m^2 n^2 + m^3 + m n^2 t )$, where $m\ll n^2$ is the memory size of L-BFGS and $t$ is the number of iterations of L-BFGS. Typically, we have $t=250$ for each L-BFGS run, and $125$ iterations for the quadratic penalty method. In practice, for the experiments in Figure 1, the average runtime of the likelihood-based method is roughly $2$ minutes on CPUs. (It is worth noting that the runtime can be significantly shortened with GPU acceleration.) We will provide this discussion with further explanation in the revision.
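For illustration, the quadratic-penalty-around-L-BFGS pattern described above can be sketched as follows. The toy data, the smoothed $\ell_1$ surrogate, and the single-entry stand-in constraint are all assumptions made for the sake of a runnable example; this is in the spirit of, but not identical to, the paper's Algorithm 2:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 3
A_true = np.tril(rng.normal(size=(n, n)))  # a sparse "ground-truth" mixing
Sigma = A_true @ A_true.T                  # population covariance of x = A s

def neg_loglik(a_flat):
    """Gaussian negative log-likelihood, up to constants, through A A^T."""
    A = a_flat.reshape(n, n)
    C = A @ A.T + 1e-8 * np.eye(n)         # keep the model covariance invertible
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + np.trace(np.linalg.solve(C, Sigma)))

def objective(a_flat, rho):
    A = a_flat.reshape(n, n)
    sparsity = np.sum(np.sqrt(A**2 + 1e-8))  # smoothed l1, friendly to L-BFGS
    constraint = A[0, n - 1] ** 2            # toy stand-in for the h(A) = 0 constraint
    return neg_loglik(a_flat) + 0.1 * sparsity + rho * constraint

# Quadratic penalty method: solve a sequence of unconstrained problems
# with an increasing penalty weight, warm-starting each L-BFGS run.
a = rng.normal(size=n * n)
for rho in (1.0, 10.0, 100.0):
    a = minimize(objective, a, args=(rho,), method="L-BFGS-B").x

A_est = a.reshape(n, n)
```

Warm-starting each inner L-BFGS run from the previous solution is what keeps the overall runtime proportional to the per-iteration cost discussed above.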
**Q4: "Are the connections between ICA and identifiability of structural equations established results? Relatedly, can you explain your contributions over existing work."**
A4: Thanks for this insightful question, which helps make our contributions clearer. As acknowledged by Reviewer ouDH and discussed in Lines 201-206, the connection between ICA and identifiability of structural equations has been established based on non-Gaussianity. One of our contributions is to extend the connection between ICA and structural equations to the case of Gaussianity (i.e., second-order statistics), which further bridges the gap between these two fields. Specifically, we provide an analogy of score-based causal discovery method (based on sparsity and second-order statistics) in ICA, which is rather different from the known connection based on non-Gaussianity. We will provide a detailed discussion to make our contributions for this part clearer in the revision.
**Q5: "What new technical tools were required over traditional sparsity?"**
A5: We sincerely appreciate this thoughtful question. Apart from traditional sparsity, we completely agree with the reviewer that choosing the right Assumptions 1 to 3 is important for the identifiability/identification. Furthermore, the characterization (i.e., Proposition 4) of matrices $\mathbf{A}$ satisfying Assumption 2 (see Eq. (3)) is also a key technical tool of the proposed identification method, because it allows us to formulate the problem as a continuous constrained optimization problem. We will include this discussion with further explanation in the final version.
**Q6: "Experiments are a little simplistic".**
A6: See our response to Q2 in the general response. | Summary: In this paper, the authors develop new identifiability conditions for ICA, based on sparsity constraints. These assumptions, in particular, do not require the sources to be non-Gaussian, as is usually the case in ICA.
Mostly, the authors take inspiration from previous work by Zheng et al. and weaken their main condition. Some assumptions are shown to be necessary.
As is usual with sparsity-based analyses, the resulting optimization problem is quite non-trivial; here, the main difficulty is to explore the space of directed adjacency matrices whose corresponding graphs are DAGs. The authors provide some discussion and implementation procedures about this.
The proposed method is evaluated against FastICA on synthetic data. A (quick) ablation study examines the effect of the assumptions not being satisfied.
Strengths: - the contribution is interesting for the long-standing problem of ICA with Gaussian sources. The authors make a commendable effort to discuss previous results, their main inspiration, and the strengthening of the results/weakening of assumptions
- the paper is well-written and pedagogical, with examples and discussion scattered throughout about the assumptions and the results
- some aspects of the paper are quite developed (relation with causal discovery, implementation, necessity of the assumptions...) without sacrificing the clarity of the writing despite the space constraints
Weaknesses: - the only assumption that is less discussed than the others, in relation to previous work or its "realistic aspect" with respect to real data, seems to also be the "main" one, Assumption 2. The focus on hard constraints is theoretically understandable, but how realistic is it? Also, the triple presentation of the optimization problem is a bit redundant, the end problem being "just" optimizing over the space of DAGs
- the paper focuses on exact identifiability, when the true covariance matrix is "available". This can be confusing for people who are not experts on ICA; can the effect of sample size and covariance estimation be discussed? Does the resulting noise have an effect on the "admissible" sparsity level (empirically, let's say)? For now, the sparsity level $\| A\|_0$ does not seem to be constrained in any way (beyond Assumptions 1 and 2)
- there is no discussion of computational complexity; it seems that the constraint $h(A)=0$ is very costly, as it involves computing the $n$th power of an $n \times n$ matrix at each iteration. Is there any simplification over brute force?
- no evaluation on real data. When the assumptions are only approximately satisfied and there is strong noise on the covariance, it is not clear what the performance of the proposed method is
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See "weaknesses" above
Minor questions/typos:
- l22 : "Kurtois"
- l 95 : "statitsic"
- Assumption 3 is a bit unclear at first read (it seems like a definition); only one direction of the "if and only if" is important, so perhaps it is possible to reformulate it
- what is the exact sense of "asymptotically" in Theorem 7? (see also the question/comment about sample size, which is not discussed in the paper)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed some aspects extensively, eg Assumption 1 with respect to previous work, other are a bit scarce, eg Assumption 2. Also, the computational complexity is not discussed, and there is no experiments on real data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the time devoted and the thoughtful comments. Please find the response to your comments and questions below.
**Q1: Lack of discussion about Assumption 2 and how realistic it is.**
A1: We sincerely appreciate this very helpful point. Since Assumption 2 allows independent column and row permutations, we believe that it is rather mild, especially for sparse mixing matrices. In other words, if the influences from sources to observed variables are sparse, then Assumption 2 is likely to hold. Such a notion of sparse influence tends to apply when the observations are influenced by the sources in a "simple" manner, e.g., in several biological (Busiello et al., 2017) and physical (Einstein, 1905) systems. Furthermore, sparse influences also serve as the fundamental principle for various works in ICA (Zheng et al., 2022; Lachapelle et al., 2022), as well as other fields like causality (Spirtes et al., 2001; Raskutti et al., 2018). See also our response to Q5 for Reviewer ouDH. We will provide this discussion with further elaboration in the revision.
**Q2: "the triple-presentation of the optimization problem is a bit redundant".**
A2: Thanks for this suggestion. We will update the presentation of the optimization problem in the revision to make it concise.
**Q3: "Can the effect of sample size and covariance estimation be discussed?"**
A3: See our response to Q1 in the general response.
**Q4: "Does the resulting noise have an effect on the 'admissible' sparsity level".**
A4: We sincerely appreciate this insightful question. In the current theoretical results, the resulting noise does not have an effect on "admissible" sparsity level. That is, as the reviewer nicely noted, the sparsity level $\|\mathbf{A}\|_0$ is not constrained in any way beyond Assumptions 1 and 2. See also our reply to Q1 in the general response.
**Q5: Computational complexity and "is there any simplification over brute force".**
A5: Thanks for raising this excellent question, which helps improve the clarity of the method. As the reviewer nicely mentioned, a straightforward, brute-force approach to compute the constraint term $h(\mathbf{A})=\operatorname{tr}\left(\sum_{k=2}^n (\operatorname{off}(\mathbf{A})\odot \operatorname{off}(\mathbf{A}))^k\right)$ is to compute each matrix power in $h(\mathbf{A})$ and then sum up their traces; this approach requires $O(n)$ matrix multiplications. In our implementation, we adopt a more efficient approach with a computational complexity of $O(\log n)$ matrix multiplications, inspired by Zhang et al. (2022). The rough idea is to perform exponentiation by squaring and recursively compute the term $h(\mathbf{A})$. We will provide a detailed description of this procedure and its computational complexity in the final version.
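To illustrate the repeated-squaring idea (an assumed implementation for readers, not the authors' code), the partial sum $S(m)=\sum_{k=1}^{m} M^k$ with $M=\operatorname{off}(\mathbf{A})\odot\operatorname{off}(\mathbf{A})$ satisfies $S(2m)=S(m)+M^m S(m)$, so $h(\mathbf{A})=\operatorname{tr}(S(n))$ (since $\operatorname{tr}(M)=0$) can be evaluated with $O(\log n)$ matrix multiplications:

```python
import numpy as np

def h(A):
    """DAG constraint h(A) = tr(sum_{k=2}^n M^k), M = off(A) ⊙ off(A),
    via exponentiation by squaring: O(log n) matrix multiplications."""
    n = A.shape[0]
    M = A * A
    np.fill_diagonal(M, 0.0)  # off(A) ⊙ off(A): zero out the diagonal

    def partial_sum(m):
        """Return (S(m), M^m) with S(m) = M + M^2 + ... + M^m."""
        if m == 1:
            return M, M
        S_half, P_half = partial_sum(m // 2)
        S = S_half + P_half @ S_half      # S(2*(m//2)) = S(m//2) + M^(m//2) S(m//2)
        P = P_half @ P_half               # M^(2*(m//2))
        if m % 2 == 1:                    # odd m: one extra factor of M
            S = M + M @ S
            P = M @ P
        return S, P

    S_n, _ = partial_sum(n)
    # tr(M) = 0 because M has a zero diagonal, so tr(S(n)) = tr(sum_{k=2}^n M^k).
    return np.trace(S_n)
```

As a sanity check, a strictly triangular (DAG) adjacency gives $h(\mathbf{A})=0$, while a 2-cycle gives a strictly positive value.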
**Q6: "no evaluation on real data".**
A6: See our response to Q2 in the general response.
**Q7: Typos.**
A7: Thanks for your careful reading. We will carefully proofread the paper to fix the typos in the revised version.
**Q8: "Assumption 3 is a bit unclear at first read" and "only one sense of the 'if and only if' is important".**
A8: Thanks for your constructive suggestion which helps improve the clarity of the assumption. We completely agree with the reviewer that only one sense of the "if and only if" is important. In light of your comment, we will reformulate Assumption 3 as follows:
> **Assumption 3 (Faithfulness).** The resulting covariance matrix $\mathbf{\Sigma}=\mathbf{A}\mathbf{A}^\top$ of mixing matrix $\mathbf{A}$ satisfies a hard constraint $\kappa$ only if $\kappa\in H(\boldsymbol{\xi}_\mathbf{A})$.
**Q9: "What is the exact sense of 'asymptotically' in theorem 7?"**
A9: Thanks for asking this question. By "asymptotically", we intend to mean "in the large sample limit". That is, an infinite number of samples are given so that we have access to the true covariance matrix. We will update the term "asymptotically" to "in the large sample limit" in the revision to avoid possible confusion.
**References:**
D. M. Busiello, S. Suweis, J. Hidalgo, and A. Maritan. Explorability and the origin of network sparsity in living systems. Scientific reports, 7(1):1–8, 2017.
A. Einstein. Does the inertia of a body depend upon its energy-content. Annalen der Physik, 18(13): 639–641, 1905.
Y. Zheng, I. Ng, and K. Zhang. On the identifiability of nonlinear ICA: Sparsity and beyond. In Advances in Neural Information Processing Systems, 2022.
S. Lachapelle, P. R. López, Y. Sharma, K. Everett, R. L. Priol, A. Lacoste, and S. Lacoste-Julien. Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA. Conference on Causal Learning and Reasoning, 2022.
P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT press, 2nd edition, 2001.
G. Raskutti and C. Uhler. Learning directed acyclic graph models based on sparsest permutations. Stat, 7(1):e183, 2018.
Z. Zhang, I. Ng, D. Gong, Y. Liu, E. M. Abbasnejad, M. Gong, K. Zhang, and J. Q. Shi. Truncated matrix power iteration for differentiable DAG learning. In Advances in Neural Information Processing Systems, 2022.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for their careful rebuttal, which answers many of my concerns. I will keep my score as is, but argue for accept.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: We are grateful to the reviewer for reading our response and for the acknowledgement. We will incorporate the suggestions in the revision. Many thanks. | Rebuttal 1:
Rebuttal: We thank all of the reviewers for the valuable feedback and time devoted to reviewing our work. We are encouraged that they found our work to be interesting (zk7j, vUDk), developed (zk7j), and useful (6AWD). We are grateful that the reviewers appreciate our theoretical (zk7j, vUDk, ouDH, 6AWD) and methodological (vUDk, ouDH, 6AWD) contributions; specifically, Reviewer ouDH believes that our "assumptions on sparsity are much more reasonable than in previous works and should allow for future works in this area". Reviewers zk7j, ouDH, and 6AWD also appreciate that the theoretical assumptions/results are presented in a clear way, often illustrated with explanations and examples.
We take the opportunity to clarify below common questions raised by the reviewers. We also provide individual responses to address the comments of each reviewer.
**Q1: Identifiability for finite samples.**
A1: We greatly appreciate this insightful comment. It is worth noting that ICA with Gaussian sources is a challenging ill-posed problem due to the rotational invariance of the Gaussian distribution. Therefore, establishing identifiability results for infinite samples is itself a fundamental problem, and this type of "possibility" identifiability result is needed before one is able to establish guarantees on finite samples.
Given our identifiability result on infinite samples, it will become clear that the problem is solvable under appropriate assumptions. The next step is then to extend this result for finite samples. The empirical studies in Appendix E demonstrate the effectiveness of our method for finite samples, thereby showcasing the potential to extend our identifiability result from infinite to finite samples. We will provide this discussion in the revision.
**Q2: Experiments on real data.**
A2: As acknowledged by Reviewers vUDk and ouDH, the lack of experiments on real data is not necessarily an issue "due to the breadth of theory here". At the same time, in light of the comment, we will incorporate an additional experiment on a real dataset with fMRI data. Due to the time constraint, we are conducting preliminary experiments using the dataset considered by Richard et al. (2021), by adapting the multi-subject data to suit our specific context. We find that the preliminary results align with the observations reported in Section 5. We will include experimental details and final results in the revision.
**References**
H. Richard, P. Ablin, B. Thirion, A. Gramfort, and A. Hyvarinen. Shared independent component analysis for multi-subject neuroimaging. In Advances in Neural Information Processing Systems, 2021. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
NTKCPL: Active Learning on Top of Self-Supervised Model by Estimating True Coverage | Reject | Summary: This paper proposed a new NTKCPL method to reduce the approximation error, and it has a wider effective budget range in the setting of active learning on top of a self-supervised model. The experimental results on several computer vision datasets (e.g., CIFAR-10, CIFAR-100, SVHN) validate the effectiveness of the proposed methods.
Strengths: 1. This paper is well-written. The problem this paper focuses on is important, and the proposed method is interesting.
2. This paper provides both theoretical and empirical results, which is great for a top machine learning conference like NeurIPS. The experiments are sufficient, and the conclusion is convincing.
Weaknesses: 1. A case study is suggested. For example, in the CIFAR-10 dataset, which class or classes improve much more than others, or do all classes improve at the same scale? Providing a detailed case study of the datasets together with the improved metrics would make your conclusion more convincing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Did the authors try this method on NLP datasets? It would be great if the proposed method could be evaluated in different fields. In NLP, pre-trained models seem to play a more important role than in computer vision tasks.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Some potential limitations are suggested to add. For example, a better approximation method may need more computational resources.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks so much for your constructive reviews.
**For weakness 1:**
Thank you for your suggestion. We incorporated your feedback and added a new table to showcase the class-wise accuracy of our active learning strategy on the CIFAR-10 dataset. With the exception of classes 3 and 5 (true labels: cat and dog), which are often confused in the self-supervised feature space, our method generally demonstrates improved accuracy across the other classes. In addition, we will include a case study in the revised version of the paper.
| Sample selection method | # labels | Class0 | Class1 | Class2 | Class3 | Class4 | Class5 | Class6 | Class7 | Class8 | Class9 | Avg. |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| Random | 20 | 51.03 | 60.10 | 56.53 | 28.90 | 33.73 | 31.03 | 42.70 | 23.27 | 33.73 | 50.63 | 41.17 |
| TypiClust | 20 | 40.63 | 55.03 | 63.67 | 43.83 | 53.30 | 30.70 | 57.83 | 50.57 | 37.47 | 34.80 | 46.78 |
| NTKCPL(self) | 20 | 44.57 | 59.77 | 47.40 | 27.77 | 63.07 | 51.57 | 57.87 | 55.03 | 60.67 | 73.27 | 54.09 |
| Random | 100 | 65.70 | 89.37 | 59.67 | 56.87 | 65.83 | 66.07 | 85.33 | 73.00 | 91.53 | 76.47 | 72.98 |
| TypiClust | 100 | 81.17 | 84.87 | 59.73 | 73.03 | 88.63 | 65.80 | 88.40 | 78.20 | 80.13 | 94.00 | 79.40 |
| NTKCPL(self) | 100 | 81.73 | 92.53 | 69.13 | 50.00 | 82.97 | 78.77 | 85.30 | 87.50 | 91.40 | 92.10 | 81.14 |
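For reference, class-wise accuracies like those in the table above can be computed from raw predictions with a short helper (hypothetical code for illustration, not the authors' evaluation script):

```python
import numpy as np

def class_accuracy(y_true, y_pred, n_classes):
    """Per-class accuracy: fraction of samples of each true class that are
    predicted correctly. Assumes every class appears at least once in y_true."""
    return [float(np.mean(y_pred[y_true == c] == y_true[y_true == c]))
            for c in range(n_classes)]
```

The average of the returned list is the class-balanced accuracy reported in the last column.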
**For question 1:**
We appreciate your suggestion regarding the exploration of NLP datasets. In this current work, we focus on computer vision tasks. This decision was made to maintain consistency with previous research on low-budget active learning [R1].
Nevertheless, we find your suggestion important and believe that investigating the performance of our approach on NLP data could be a valuable direction for future research. We will consider NLP datasets in our future work to provide a more comprehensive evaluation of our method's capabilities.
**For limitation 1:**
Thank you for pointing this out. The current approximations primarily stem from the NTK (neural tangent kernel) approximation of DNNs and the utilization of subsets to estimate empirical risk over the entire active learning pool. As the reviewer pointed out, reducing these approximation errors, especially the latter one, requires more computational resources.
[R1] Hacohen, Guy, Avihu Dekel, and Daphna Weinshall. "Active Learning on a Budget: Opposite Strategies Suit High and Low Budgets." International Conference on Machine Learning. PMLR, 2022. | Summary: Active Learning is a crucial problem that focuses on selecting a subset of examples from an unlabeled dataset to be labeled. The primary objective is to ensure that when the model is trained using these selected examples, it achieves a lower empirical risk upon evaluation, assuming all the unlabeled data points are eventually labeled.
While getting labels is a difficult task, current foundation models that utilize self-supervision are on par with many supervised learning procedures. This work aims to use an existing self-supervised trained model as the feature extractor to train a classifier network with active learning. To get the samples to train the neural network, the paper proposes to use the NTK to estimate the classifier's output as if it were trained on an example, say $x^{\prime}$, from the unlabeled pool. Then, based on a criterion that depends on accuracy (0-1 loss), the algorithm returns a set to be labeled.
This process is done iteratively. The paper compares against common baselines such as Random/Coreset/BADGE/Entropy/Lookahead and shows that there are improvements. Overall, the problem is well motivated; however, there are several points of confusion, and in particular writing issues, that make this submission unfit for publication at this stage.
Strengths: - The idea of using the NTK on top of a self-supervised trained network is good, as it also provides scope for theoretical guarantees.
- A wide spectrum of baselines has been covered, and gains are decent in the low-budget regime.
- Adaptive strategy for refining clusters is interesting.
Weaknesses: The major weakness of this work is a lack of clarity in the writing. The notation in many places is not clear to me at all, with subscripts used interchangeably for the time "t" and for label indexing. I will summarize the places where there is clear notational abuse or inconsistency in the upcoming lines. While the work is interesting, the mathematical notation and the algorithm need to be written very precisely and clearly, which in the current form does not seem fit for publication.
1. Section 2 never formally defines what $f$ is mathematically. What is the input space and what is the output space? Does it output class logits, or features?
2. Wherever an $\operatorname{argmin}$ is written, it should always have the space of optimization
3. Section 3.2 near Equation 4: it is written "We denote the predictions of NTK with the dataset $D_{C}$ as $\hat{f}_{DC}$". Then does $f$ also output a class, or a one-hot vector?
4. In summary, the proposed algorithm NTKCPL chooses examples to be labeled such that, when added to the labeled pool, they would minimize the empirical risk? Since the backbone is not trained, and they are also using the NTK, how different is it in spirit from Mohamadi et al.?
5. In the NTKCPL Algorithm, I don't understand what exactly $f_{self}$ and $f_{al}$ are. They are never defined. Are they the feature extractor and the learned classifier? If yes, then why define new variables?
6. In NTKCPL Algorithm line 6, $b_i$ is never defined.
7. In the NTKCPL Algorithm, line 13, the subroutine for calling the NTK is never defined. Moreover, it seems like an overloading of the $\hat{f}$ notation. Lastly, there doesn't seem to be any utility of $f_{0}$ other than this routine.
8. Line 20 seems to be performing a vacuous set-minus from the unlabeled pool, whereas in the labeled pool it seems incorrect to also have a tuple of an input and its pseudo-label.
9. Where is $f_t$ being used? There needs to be a full subroutine of the AL procedure starting from scratch (that is, $L = \emptyset$) and going all the way to the required budget, in batches if needed.
10. $g$ is never mathematically defined in the proposition.
11. I don't understand the meaning of dominant labels. Moreover, the usage of $D_{dom}$ is incorrect if it indexes over the set of class labels, as previously $D_{.}$ was used for the dataset.
12. Lots of unnecessary notation is introduced, such as $ymap$, which could be avoided by appropriately defining $g$
13. What is the meaning of $nff$ subscript, and similarly, $fnf$?
14. Why should clusters based on the classifier being trained be reliable? That is, the usage of $f_{al}$?
15. The experiments mention neither the pretraining data for each dataset/architecture nor the MLP architecture.
16. What is the reason for Entropy and other popular methods underperforming even at decently high budgets, on the order of 1000s?
17. How is max number of clusters determined?
18. How different is coverage estimation from accuracy?
Lastly, the work would have benefited from experiments with CLIP models, which are among the most popular available pre-trained models.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Please refer to the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: I think the paper should have included use cases where the experiments involve pre-trained CLIP ResNet models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for pointing out the confusing notation, we will ensure that the revised version incorporates these corrections.
**1.** It outputs class logits. In the paper, $f$ denotes a neural network model, $f: \mathbb{R}^{d}\rightarrow \mathbb{R}^{k}$, which maps an input sample $x \in \mathbb{R}^{d}$ to a $k$-class prediction.
**2.** Thank you for pointing this out. We will fix it in the revised version of paper.
**3.** $\hat{f}$ is the output of the NTK model, which also outputs a class prediction.
**4.** Similar to the work of Mohamadi et al., our approach also falls within the category of "look-ahead" active learning methods based on the NTK. However, a key distinction lies in the sample selection criterion: while they employ an expected maximum change principle for sample selection, our method is rooted in a more fundamental objective, namely minimizing the expected risk of the entire active learning pool. Our primary technical contribution lies in the construction of CPL to realize this sample selection criterion. Building upon an error analysis of approximating the empirical risk on the active learning pool using the NTK and CPL, we develop an adaptive refining method for constructing CPL, which splits low-purity clusters and retains high-quality clusters. Additionally, our experimental results demonstrate a clear superiority of our approach over their work.
**5.** $f_{self}$ is the output of the pre-trained self-supervised model's backbone, while $f_{al}$ corresponds to the output of the penultimate layer of the 2-layer MLP classifier (linear-bn-relu-linear) in our paper. Notably, $f_{al}$ denotes the output of the first linear layer within the 2-layer MLP classifier. To avoid any potential confusion between the neural network model and the feature representations, we intend to clarify our notation in the revised version of the paper.
**6.** Thank you for pointing this out. It's a typo. It should be $b$, the budget of active learning.
**7.** In line 13, we compute the output approximated by NTK directly following Equation (13) of the paper. The NTK kernel, $ker$, and the output of the neural network, $f_0$, at the initialization parameters are solely used for the computation of the output approximated by NTK.
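Equation (13) is not reproduced in this thread; for readers unfamiliar with it, the standard NTK linearized prediction (kernel regression around the initialization $f_0$) takes the following form. All names here are hypothetical, and this is a generic sketch rather than the paper's exact implementation:

```python
import numpy as np

def ntk_predict(K_train, K_test_train, f0_train, f0_test, Y_train, reg=1e-6):
    """Standard NTK kernel-regression look-ahead prediction:
    f_hat(x) = f0(x) + K(x, X) @ K(X, X)^{-1} @ (Y - f0(X)).
    K_train: (n, n) kernel on labeled data; K_test_train: (m, n) cross-kernel;
    f0_*: network outputs at initialization; Y_train: (n, k) one-hot labels.
    A small ridge term keeps the linear solve well conditioned."""
    n = K_train.shape[0]
    residual = Y_train - f0_train                       # (n, k) label residuals
    alpha = np.linalg.solve(K_train + reg * np.eye(n), residual)
    return f0_test + K_test_train @ alpha               # (m, k) predicted logits
```

Note that on the training points themselves (K_test_train equal to K_train rows), the prediction interpolates the labels up to the ridge term.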
**8.** The sample selection is done in line 18, and it is the basic active learning loop: query the true labels of the selected samples from the oracle (line 19), merge them into the existing labeled set (line 20), and train a new model based on the extended labeled set (line 21). In line 19, $x_{i'_{1,...,b}}$ refers to the $b$ samples to be labeled selected by the active learning algorithm, and $y_{i'_{1,...,b}}$ are their true labels from the oracle. The CPL is written with the subscript $cpl$, e.g., $y_{cpl,i}$.
**9.** Thank you for pointing this out. It's an overloaded definition. $f_t$ is the classifier trained with the labeled set at the $t$-th active learning cycle. We will revise the algorithm with the corrected notation.
**10.** We put the revised definition of $g$ in the global response pdf file.
**11.** The term "dominant label" refers to the true class that has the most samples within a given CPL class. We will fix it in the revised version of the paper.
**12.** Thank you for pointing this out, we will fix it in the revised version.
**13.** As in line 177 of the paper, the probability that the NTK prediction agrees with $y$ but not with $y_{cpl}$ is denoted $P_{fnf}$, and the probability that the NTK prediction does not agree with $y$ but agrees with $y_{cpl}$ is denoted $P_{nff}$. The first $f$ or $nf$ denotes whether the NTK prediction agrees with the true label $y$, and the last $f$ or $nf$ denotes whether the NTK prediction agrees with the pseudo-label $y_{cpl}$.
**14.** As mentioned in our response to question 5, $f_{self}$ represents the output of the self-supervised pre-trained model's backbone, while $f_{al}$ corresponds to the output of the hidden layer of the 2-layer MLP classifier, where the classifier is trained on top of the frozen self-supervised pre-trained backbone.
As discussed in the paper [R1], training an MLP classifier on the frozen pre-trained backbone often yields better performance compared to using a Linear classifier. This observation suggests that the MLP classifier learns features more suitable for classification. In other words, after being fine-tuned with some labeled data, the active learning feature $f_{al}$ may be more tailored for classification tasks than $f_{self}$. Consequently, clustering on the $f_{al}$ feature space is likely to result in higher-quality pseudo-labels, thereby enhancing the effectiveness of our active learning method.
**15.** The Oxford-IIIT Pets and ImageNet-100 experiments employed a ResNet-50 model pre-trained on the ImageNet dataset. As for CIFAR-10, CIFAR-100, and SVHN, we performed pre-training on their respective datasets using the following architectures: ResNet-18 for CIFAR-10, WRN-28-8 for CIFAR-100, and ResNet-18 for SVHN. The architecture of the pre-trained models was written in the Implementation section of the paper.
Regarding the architecture of the MLP classifier, it is given in Appendix Section 3, Table 1. It is a 2-layer MLP with the following structure: Linear + BatchNorm + ReLU + Linear. The output dimension of the first linear layer is also specified in Appendix Table 1.
**16.** The likely reason is the training method. In our study, we freeze the backbone and train an MLP classifier, which demonstrates superior performance within the low-budget regime. We have added extra experiments fine-tuning the entire network; the results are shown in Table 10 of the global response PDF file.
**CLIP model**. Thank you for pointing this out. We will consider it in future work.
[R1] Ren, Yi, et al. "How to prepare your task head for finetuning." The Eleventh International Conference on Learning Representations. 2022.
---
Rebuttal Comment 1.1:
Comment: **17.** It's a hyperparameter. We roughly set the maximum number of clusters to be around 3-5 times the number of classes in the dataset. For datasets with a larger number of samples per class (such as CIFAR-10 and SVHN), we increase the maximum number of clusters to around 10 times the number of classes in the dataset.
**18.** Coverage typically refers to a region identified by the active learning algorithm, within which the samples often exhibit higher accuracy. It does not align perfectly with the true accuracy on the active learning pool. We argue this discrepancy is a limitation of previous active learning methods, which is why we propose estimating accuracy on the active learning pool based on the NTK and CPL. This approach aims to align coverage with the actual accuracy on the active learning pool, enhancing performance. | Summary: The paper presents a look-ahead strategy for more efficient active learning when used with self-supervised learning features. The approach uses neural tangent kernels and pseudo-labels generated by clustering self-supervised or active learning features to estimate an approximation of the empirical risk of each unlabeled data sample. It then selects those examples for label annotation that will maximally reduce this empirical risk in each iteration. The paper demonstrates the validity and performance of this approach on 5 image datasets, showing that the approach outperforms other baseline active learning methods in most cases and remains dominant over a larger range of training budgets than earlier SOTA methods for low- and high-budget strategies.
Strengths: Originality: To my knowledge this is the first look ahead strategy for active learning that combines a neural tangent kernel with clustering based pseudo-labels to estimate an approximation of empirical risk of each unlabeled data sample to select for active learning.
Quality: The paper is of average quality. It is incremental, building on earlier concepts of NTK and applying it to active learning with self-supervised features. It does a good job motivating, and validating the approach. The presentation in section 3.3 with a lot of new terms and notations is tiring and could be simplified.
Clarity: The paper could be improved with a few more editing passes to complete some incomplete sentences and polish the grammar for better readability.
Significance: The work is significant since it shows an approach that is at or improves on SOTA for active learning with self-supervised features.
Weaknesses: I see the following weaknesses:
1. Section 3.3 is currently dense with a lot of new terms and notations that are not motivated or explained well. I suggest the authors refine this section, explaining more how the arguments lead to their proposition. The appendix is not very helpful as it currently stands to understand this proposition well.
2. A major dimension missing in the paper is how the time taken for active learning with NTKCPL compares with other SOTA methods in the different budget regimes. I suspect that NTKCPL is faster, but it is not clear if it indeed is faster, and if so, by how much. Since there is also a clustering algorithm run for each iteration, it is not clear how the overall approach scales with size and dimensionality of data, number of classes etc.
3. In Algorithm 1, I think line 6 should read “min(b_0/2,”. It now reads “min(b_i/2,”
4. Figure 2 compares various methods with NTKCPL. However, different panels use different colors for the same method, making it hard to read the (already dense) plots. Please use consistent colors/line types for the same method in each plot.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
1. From the weaknesses above, how does NTKCPL scale with size and dimension of data, number of classes etc., and how do the running times compare with SOTA methods over different budget regimes?
2. Section 4- implementation: The paper mentions 2 sets of models used for each of the 5 datasets. What do the 2 sets represent - is one used for high budget and the other low budget?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I don’t see any significant negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive reviews, which will help us to improve the quality of the paper.
**Weakness 1**
We appreciate the reviewer's feedback regarding Section 3.3. We will carefully revise it to provide a more comprehensive and intuitive explanation of the arguments that lead to our proposition in the revised version.
**Weakness 2**
As our approach falls under the "look-ahead" category of active learning methods, its computational cost is generally higher than that of "myopic" methods such as entropy. However, compared to other look-ahead active learning methods, our approach has similar time complexity. The time complexity analysis is given in the global response, and the practical running time of our method and one of the baseline methods, LookAhead, is shown in Appendix Section 4.
**Weakness 3**
Thank you for pointing out this typo. It should be min($b/2$, where $b$ is the budget of each active learning cycle. We will fix it in the revised version.
**Weakness 4**
Thank you for pointing out the inconsistent figure colors. We will fix them in the revised version.
**Question 1**
The analysis of the time complexity is shown in the global response.
**Question 2**
For the two different pretraining methods used in our study, we employed BYOL for the larger dataset, ImageNet-100; BYOL demonstrated superior pretraining performance compared to SimSiam on the large dataset. However, the BYOL paper did not provide pretraining results for small-scale datasets such as CIFAR-10, so we chose SimSiam to pre-train on these small-scale datasets. SimSiam reported good pretraining performance on small datasets.
Regarding the selection of different network architectures for each dataset, we followed established practices from previous research. For datasets with lower image resolutions, such as CIFAR-10 and CIFAR-100, we utilized the ResNet18 or WRN288 architecture. For datasets with higher image resolutions and greater complexity, like ImageNet-100, we adopted the ResNet50 architecture.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my earlier review comments.
I will keep my earlier rating recommendation | Summary: The paper proposes a active learning strategy that combines self-supervised learning with NTK approximation to estimate empirical risk more accurately. The proposed method outperforms state-of-the-art methods and has a wider effective budget range.
Strengths: Well-written: The paper is well-written, informative, and easy to understand. The authors provide clear explanations of the proposed method and the analysis, making it accessible to a wide audience.
Comprehensive analysis: The paper presents a comprehensive analysis of the proposed method, including an ablation study and experiments on various datasets.
Experimental results: The paper presents experimental results that demonstrate the effectiveness of the proposed method on various datasets. The results show that the proposed method outperforms state-of-the-art methods in most cases and has a wider effective budget range.
Weaknesses: * Table 2 is missing
* More self-supervised learning method + active learning should be compared
* Novelty seems not strong enough for NIPS; as the author mentions, self-supervised + active learning has been explored before. Would you explain why the previous methods are not as competitive as this one?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * “in our scenario, training on top of the self-supervised model, NTK does not approximate predictions of the whole network well. The main reason is that weights of the neural network are initialized by self-supervised learning rather than NTK initialization, i.e., drawn i.i.d. from a standard Gaussian” — I am still confused about the reason; would you give me more detail?
* For Figure 1, Is it a concept graph or an experimental result graph?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The proposed method requires a pre-trained self-supervised model, which may not be available or feasible to obtain in some scenarios. This limits the applicability of the proposed method to certain domains and datasets.
Additionally, the paper does not provide a detailed analysis of the computational complexity of the proposed method, which may be a concern in some scenarios where computational resources are limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks so much for your constructive reviews.
**1. For weakness (1): table 2 is missing**:
Table 2 is included at line 273 of the paper.
**2. For weakness (2): More baseline**:
Thank you for your comment. We have reported TypiClust, which is tailored to this specific setting, in the paper. To include more baselines, we added a new method, probcover, which is designed specifically for the low-budget regime. The results of the new baseline are shown in Tables 1, 2, and 3 of the global response PDF file.
**3. For weakness (3): Novelty seems not strong enough for NIPS, as the author mentions, self-supervised + active learning has been worked before, and would you explain why the previous methods are not competitive as this**:
In terms of novelty, our method combines NTK and CPL to estimate the empirical risk of the active learning pool, and the approximation error is analyzed. Thus, unlike many prior active learning methods that rely on heuristic criteria, our method offers a certain degree of theoretical justification (as pointed out by reviewer YYhz).
Moreover, the main drawback of existing AL+SSL methods is their narrow effective budget range. Many of these methods have been assessed in scenarios where the annotation budget is extremely limited (i.e., labeled samples are fewer than 6 times the number of dataset classes), where model performance is often not good enough. As the number of annotated samples gradually increases, the performance of existing low-budget active learning strategies diminishes; in some instances, these strategies even fall below the random baseline. This makes existing low-budget strategies unreliable in practice, and these observations motivated us to propose a novel approach with a wider effective budget range within the low-budget regime.
In Section 2, we visualize our insight, where we identify a key issue in existing methods rooted in feature distance-based coverage estimation. This can lead to either overestimation or underestimation, resulting in suboptimal sample selection and limiting the effective budget range. To address this, we propose using the NTK and CPL for estimating the empirical risk on the active learning pool.
**4. For question (1): The reason why NTK does not approximate a NN initialized by pre-trained weights very well**:
Existing methods approximate the neural network output using the NTK by performing a first-order Taylor expansion around the network's initial weights [R1] (Sec. 2.2). NTK theory posits that as the width of a neural network tends to infinity, the change in each weight during training approaches zero, making the Taylor expansion around the initial values sufficiently accurate. In practice, however, neural networks have finite width and the weight changes during training cannot be regarded as zero, so the choice of Taylor expansion point affects the final approximation error.
In our training setting, we initialize the neural network with weights from self-supervised pretraining. After fine-tuning the entire network with labeled data, the weights of the network are significantly different from the random initialization weights used by NTK. As a result, NTK cannot provide a satisfactory approximation of the output of neural networks trained in this manner.
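This breakdown of a first-order (NTK-style) linearization away from its expansion point can be illustrated numerically. The sketch below is ours, not from the paper or [R1]: a toy finite-width network is linearized around an "initialization" theta0, and the linearization error is compared for a small versus a large weight displacement (the latter mimicking weights that have moved far from the expansion point, as after pretraining and fine-tuning).

```python
import numpy as np

rng = np.random.default_rng(0)

def net(theta, x):
    # toy 1-hidden-layer scalar network; theta packs both weight matrices
    w1 = theta[:24].reshape(8, 3)
    w2 = theta[24:].reshape(1, 8)
    return float(w2 @ np.tanh(w1 @ x))

def grad(theta, x, eps=1e-5):
    # finite-difference gradient of the output w.r.t. all weights
    g = np.zeros_like(theta)
    f0 = net(theta, x)
    for j in range(len(theta)):
        t = theta.copy()
        t[j] += eps
        g[j] = (net(t, x) - f0) / eps
    return g

theta0 = rng.normal(size=32)   # expansion point ("initialization")
x = rng.normal(size=3)
g0 = grad(theta0, x)

def lin_error(delta):
    # |true output change - first-order Taylor prediction|
    return abs(net(theta0 + delta, x) - net(theta0, x) - g0 @ delta)

d = rng.normal(size=32)
small, large = lin_error(0.01 * d), lin_error(1.0 * d)
# the linear model is accurate near theta0 but degrades far from it
```

Here `small` is typically orders of magnitude below `large`, mirroring the point above: a Taylor expansion around one set of weights is a poor approximation once the actual weights sit far from that expansion point.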
**5. For question (2): For Figure 1, Is it a concept graph or an experimental result graph?**:
In fig.1, all unlabeled samples are visualized using t-SNE based on CIFAR-10 self-supervised features. The labeled samples consist of 50 samples selected by probcover [R2]. In fig.1(a), the radius of blue circles is computed using the coreset approach. In fig.1(b), the coverage radius is calculated using the probcover method. In fig.1(c), the black dots represent samples deemed covered by our method, i.e., samples for which NTK predictions align with CPL. In fig.1(d), the black dots represent true covered samples that are predicted consistently with the true labels by a classifier trained using this set of labeled samples.
**6. For limitation (1): requires a pre-trained self-supervised model**:
To our knowledge, a major constraint on obtaining self-supervised models lies in the requirement for a sufficient amount of unlabeled data to train them. However, there are scenarios where obtaining an adequate volume of data may be challenging. In response, our experiments encompass the utilization of self-supervised models pre-trained on ImageNet for the Oxford-Pets dataset, yielding promising results as illustrated in Fig. 2(d) of the paper.
Moreover, while our experiments are conducted based on self-supervised models, our proposed method is not confined solely to self-supervised models. It is applicable in contexts where well-pretrained models are available. Given the extensive research and rapid advancement of foundational models, we believe that the constraints associated with obtaining a high-quality pre-trained model are progressively diminishing.
**7. For limitation (2): computational complexity**:
Thank you for pointing this out, we analyze it in the global response.
[R1] Lee, Jaehoon, et al. "Wide neural networks of any depth evolve as linear models under gradient descent." Advances in neural information processing systems 32 (2019).
[R2] Yehuda, Ofer, et al. "Active learning through a covering lens." Advances in Neural Information Processing Systems 35 (2022): 22354-22367.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: Thanks for your response. I read the paper once again, along with all the review comments and replies. I agree with Reviewer YYhz that the mathematical notations should be made clearer. Have you submitted a new version?
---
Reply to Comment 1.1.1:
Title: Notations
Comment: Thank you for your response. The revised notations are shown in the global response. We will incorporate them into a revised version of the paper. | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for their valuable feedback. The computational complexity of our algorithm is a common issue for several reviewers, so we put the computational complexity analysis in the global response.
The computational complexity of our algorithm can be broken down into two main components. The first component is the generation of CPL as described in Algorithm 2 of the paper and the second component involves the utilization of NTK approximations. Let's denote the size of the labeled dataset as $L$, the size of the unlabeled dataset as $U$, the budget for selecting labeled samples per active learning round as $b$, and the number of parameters in the model as $P$ (in our case, the parameters of the MLP classifier used in NTK computation).
Referring to [R1], the complexity of computing the NTK kernel is $O(LUP + L^2P + L^3)$. The time complexity for selecting a single sample from $U$ using the NTK is $O(UL^2)$. Consequently, the time complexity for computing the NTK kernel and selecting $b$ samples is approximately $O(b(LUP + L^2P + L^3 + UL^2))$.
Regarding the CPL computation, the primary cost arises from executing k-means clustering $(C_{max} - C_0)$ times, where we split the most impure cluster into two, i.e., $k = 2$. Here, $C_{max}$ is the maximum number of clusters, $C_0$ is the initial number of clusters, $I$ is the maximum number of k-means iterations, and $d$ is the feature dimension used for clustering. As shown in Algorithm 1, each active learning round involves generating CPL once and selecting $b$ samples, resulting in an overall complexity of $O(b(LUP + L^2P + L^3 + UL^2) + UdI)$. In our scenario, the dominant term is $O(bLUP)$, which is similar to the LookAhead framework [R1]; the practical running time is shown in Appendix Section 4.
[R1] Mohamadi, Mohamad Amin, Wonho Bae, and Danica J. Sutherland. "Making look-ahead active learning strategies feasible with neural tangent kernels." Advances in Neural Information Processing Systems 35 (2022): 12542-12553.
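The cluster-splitting loop behind the CPL generation described above can be sketched as follows. This is our own minimal illustration, not the paper's Algorithm 2: the names (`two_means`, `impurity`, `build_cpl`) are invented, it starts from a single cluster rather than $C_0$ initial clusters, and impurity is computed against a full label vector, whereas the real method only has labels for the labeled subset.

```python
import numpy as np

def two_means(X, iters=20, seed=0):
    # minimal k-means with k = 2; returns a 0/1 assignment per row of X
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), size=2, replace=False)].astype(float)
    lab = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        lab = np.linalg.norm(X[:, None] - c[None], axis=2).argmin(axis=1)
        for k in (0, 1):
            if (lab == k).any():
                c[k] = X[lab == k].mean(axis=0)
    return lab

def impurity(y, idx):
    # 1 - majority fraction of the labels falling inside this cluster
    _, counts = np.unique(y[idx], return_counts=True)
    return 1.0 - counts.max() / len(idx)

def build_cpl(X, y, c_max):
    # start from one cluster; repeatedly split the most impure one in two
    clusters = [np.arange(len(X))]
    while len(clusters) < c_max:
        worst = max(range(len(clusters)),
                    key=lambda i: impurity(y, clusters[i]))
        idx = clusters.pop(worst)
        lab = two_means(X[idx])
        parts = [idx[lab == k] for k in (0, 1) if (lab == k).any()]
        if len(parts) < 2:          # degenerate split: stop early
            clusters.append(idx)
            break
        clusters += parts
    cpl = np.empty(len(X), dtype=int)   # pseudo-label = final cluster id
    for cid, idx in enumerate(clusters):
        cpl[idx] = cid
    return cpl

# two well-separated blobs: one split recovers them
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
cpl = build_cpl(X, y, c_max=2)
```

In the rebuttal's setting $C_{max}$ is a hyperparameter (roughly 3-10 times the number of classes), so impure clusters keep being refined until the cluster budget is exhausted, at which point each sample's pseudo-label is its final cluster id.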
Pdf: /pdf/8f8dd15dd323a9f5e2337661e3609ccb46f34cd1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces a novel approach that combines active learning with self-supervised learning, known as neural tangent kernel clustering-pseudo-labels (NTKCPL). The method leverages the power of the neural tangent kernel (NTK) in conjunction with self-supervised learning features to enhance the estimation of lookahead. Additionally, clustering-pseudo-labels are employed to estimate the classification error. The paper includes a thorough analysis of the approximation and presents comprehensive experimental results, comparing the proposed methods against benchmark techniques.
Strengths: As demonstrated in the comparison experiments, NTKCPL exhibits substantial performance gains.
The analysis of CPL error is important, as it effectively reveals that errors arise from over-clustering and impurity. Building upon this analysis, the proposed cluster generation algorithm effectively addresses these issues, displaying a logically concrete solution.
Weaknesses: The paper is motivated from the concepts of "phase transition" and "effective budget range" in active learning. However, it lacks analysis regarding why the proposed method can increase the effective budget range. Additionally, in Table 2, the absolute percentage of the "Effective Budget Ratio" is dependent on the chosen total annotation quantity, e.g. a larger total annotation quantity leads to a smaller "Effective Budget Ratio," suggesting that the "Effective Budget Ratio" is not well-defined.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Eq. 6 may include a typo: Shouldn't the superscript "k" be a subscript "k"? And j is not shown in the right hand side of the equation.
What's the difference between $\hat{f}_{y}$ and $\hat{f}_{ymap}$
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: The authors have addressed the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable reviews.
**1. Weakness (1) “lacks analysis regarding why the proposed method can increase the effective budget range”:**
In Section 2, we visualize our insight, where we identify a key issue in existing methods rooted in feature distance-based coverage estimation. This can lead to either overestimation or underestimation, resulting in suboptimal sample selection and limiting the effective budget range. To address this, we propose using the NTK and CPL for estimating the empirical risk on the active learning pool. Our results in Fig. 4 showcase that our method's estimated empirical risk closely approximates the true empirical risk. This alignment likely contributes to the extension of the effective budget range facilitated by our approach.
Moreover, further comprehensive investigations remain a fascinating direction for designing robust and safe active learning strategies.
**2. Weakness (2) “Effective Budget Ratio" is not well-defined”:**
Indeed, the current definition of effective budget ratio may not be an ideal metric, as it can be influenced by the total number of annotations. Exploring better ways to quantify the effective annotation range for active learning strategies could be a valuable direction for future research. Nevertheless, as a qualitative indicator, this metric can still serve as a valuable tool. Regardless of the total annotation count, if an active learning strategy exhibits a notably low effective budget ratio, it suggests that the strategy might not reliably yield positive outcomes. Hence, the reported results in this paper still demonstrate the advantages of our proposed method over typical existing active learning strategies.
Furthermore, the effective budget ratio reported in the paper serves as a succinct summary of all experiments. The detailed effective budget scope can be found in Fig. 3.
**3. Question eq.6**
Thank you for pointing this out. We clarify the definition of eq.6 in the global response pdf file.
**4. Difference between $\hat{f}_{y}$ and $\hat{f}_{ymap}$**
$\hat{f}_{y}$ is the output of the NTK trained with the true labels, and $\hat{f}_{ymap}$ is the output of the label mapping function $g$, which maps the output of the NTK trained with the CPL labels into the true label classes. | Summary: This paper aims to develop an active learning method that is effective across various budgets and compatible with self-supervised learning. The proposed approach, NTKCPL, a look-ahead active learning strategy, selects a subset that is expected to train the network to minimize the error on the unlabeled data pool. To efficiently estimate the model prediction when trained with a candidate set, they employ the NTK. To do so, they freeze the network's backbone and train only the classifier. Pseudo-labels are assigned to the unlabeled data pool for empirical risk calculation by applying a constrained k-means algorithm to self-supervised features. The loss between pseudo-labels and approximated predictions is calculated to select data that will likely help minimize the unlabeled pool's loss. The proposed method is evaluated on five datasets and outperformed the baselines on most datasets and budget ranges.
Strengths: - Combining NTK and pseudo label from clustering to estimate empirical risk is new to me.
- The proposed method is evaluated on various datasets, and in most cases, it demonstrated superior performance compared to the baseline approaches.
- They analyze the approximation error of the empirical risk when using NTK and CPL.
Weaknesses: - Limited technical contribution
- An essential element of the proposed method stems from earlier work [26] that utilizes NTK approximation of DNN prediction for look-ahead active learning.
- The most prominent distinction from [26] is that this work utilize expected error reduction instead of expected model output change for active selection, and proposed a method for assigning pseudo labels to facilitate this.
- Given that both expected error reduction and pseudo labeling through feature vector clustering are widely used techniques, the technical contribution of the proposed method could be seen as incremental.
- Concerns about practicality
- The method needs to freeze the backbone to ensure the accuracy of the NTK approximation (line 151 - 154). However, according to the existing self-supervised learning literature [a], there is a substantial performance gap between fine-tuning the entire network and those that only train the classifier.
- Moreover, the proposed method appears to be dependent on the quality of the features learned through self-supervised learning. Although it is claimed that the active learning feature is used to improve clustering purity, the results in table 1 reveals a performance advantage for self-supervised features in low-budget situations.
- Need for comprehensive baseline comparisons
- It would be beneficial if the performance of [24] was also reported, given that the proposed method follows the training configuration of [24].
- It appears that the results using self-supervised features on Cifar 100 may be missing. It would be advantageous to see these results.
- Furthermore, I am curious as to whether the baseline methods were also trained with the frozen backbone, and how their performance would vary depending on this factor.
- I encountered difficulties in smoothly following the provided script. I believe there is room for improvement in the writing.
- I found the paper's main point to be somewhat confusing. On lines 49 - 52, one of the stated goals of the proposed method is an active learning strategy with a wider effective budget range, yet on line 226, there is a focus on the low-budget regime.
- Furthermore, the notations in the method section have not been clearly defined, and they are somehow confusing. For instance, in the script, 'f' denotes a neural network (line 125), 'f_0' represents the network's output (line 127), 'f_self' and 'f_al' are features (line 1 of algorithm 1), 'f_t' signifies the classifier (line 21 of algorithm 1), and '\hat(f)_cpl' is the prediction (line 170).
[a] He, Kaiming, et al. "Masked autoencoders are scalable vision learners." *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*. 2022.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - I am interested in understanding the performance degradation when not using a frozen backbone.
- I am curious about the upper bound of performance achieved through empirical risk estimation using true labels and NTK.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper appropriately states its limitations and broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable reviews, which helps us to improve the paper.
**1. Weaknesses. technical contribution**
Yes, our approach is built on the LookAhead framework. However, the key innovation is that we use the CPL to estimate the empirical risk and design a CPL construction method based on the analysis of the approximation error in estimating the empirical risk using NTK and CPL. Table 1 and Figure 2 in the paper show that our method significantly outperforms LookAhead.
While it is true that expected error reduction (EER) is an established concept, its application within the context of deep active learning remains relatively underexplored. To the best of our knowledge, we have only identified one previous work [R1] that incorporates EER in deep active learning. Furthermore, it's important to note that [R1] requires an extra validation set to estimate expected error, which may not be practical in the low-budget regime. In contrast, our proposed method leverages CPL to estimate expected error.
Additionally, regarding technical implementation, our CPL is different from a simple execution of clustering in the feature space. We carefully analyze the approximation error and design an adaptive refining method for CPL. This approach reduces the impact of these approximation errors and enhances the accuracy of our method.
**2. Weaknesses. Concerns about practicality (a) training method**
Fine-tuning the entire network yields significantly better performance when a large amount of annotated data is available. However, our main focus in this paper is on the low-budget regime, with a wider effective budget range than that explored in the previous paper [R2]. Within this range of annotation quantities, training only the classifier has demonstrated superior or comparable performance compared to fine-tuning the entire network. The results of the two training methods on CIFAR-10, CIFAR-100, and ImageNet-100 are shown in Tables 7, 8, and 9 of the global response PDF file.
**3. Weaknesses. Concerns about practicality (b) dependent on the quality of the self-sup. features**
In the context of the five datasets used for validation, CIFAR-10 stands out as the only example where NTKCPL(self), our method based on self-supervised features, significantly outperforms NTKCPL(al), our method based on active learning features, in low-budget situations. We believe this performance difference is attributable to the width of the active learning features (the output of the hidden layer of the 2-layer MLP classifier). In our CIFAR-10 experiments, we followed previous works in employing a 2-layer MLP classifier with a hidden layer width of 64, i.e., the active learning feature dimension is 64, which is much smaller than the 512-dimensional self-supervised features.
We conducted additional experiments on CIFAR-10 by adjusting the MLP hidden layer width to 512. After the number of annotations is greater than 100, the accuracy difference between NTKCPL(self) and NTKCPL(al) is less than about 0.5\%. (Due to space limitations, we will post the results during discussion)
**4. Weaknesses. Comprehensive baseline (a)**
When attempting to reproduce the method of [24], we encountered similar difficulties as described in [R2] (Appendix D.4). Because the source code is unavailable, reliably reproducing the method is challenging. To include more baseline methods, the results of probcover [R2] are shown in Tables 1, 2, and 3 of the global response PDF file. As mentioned in [R2], its performance surpasses that of W-dist [24].
**5. Weaknesses. Comprehensive baseline (b)**
Thank you for pointing this out. The result is shown in table 4 of the global response PDF file. We will include it in the revised version of paper.
**6. Weaknesses. Comprehensive baseline (c) and Question 1**
Yes, the baseline methods were also implemented with the frozen backbone. We observed that, for the baselines, training only the classifier outperforms fine-tuning the entire network in the low-budget regime (about $20-50$ times the number of label classes in the dataset).
We add the baseline performance with fine-tuning in Table 10 of the global response PDF file. For the ImageNet-100 dataset, training only the classifier consistently achieves higher accuracy than fine-tuning for the random baseline within our range of annotation quantities, so we add these experiments on CIFAR-10. When fine-tuning is adopted, the interval in which our method outperforms the other baselines is roughly consistent with the interval in which training only the classifier surpasses fine-tuning. Beyond this range, our method shows diminished performance compared to high-budget active learning baselines.
**7. Weaknesses. Room for improvement in the writing (a)**
In our work, we focus on the low-budget regime, and the mentioned "wider effective budget range" refers to a broader scope within this low-budget regime. In previous papers [R2], the typical low-budget regime usually refers to a setting where the total number of labels is about 6 times the number of classes in the dataset. However, the model's performance is often not satisfactory within this range, and when the number of labeled samples increases, existing active learning strategies do not reliably outperform the random baseline. Therefore, it became necessary to propose a new method that achieves a wider effective budget range within the low-budget regime.
**8. Weaknesses. Room for improvement of writing (b)**
Thank you for pointing out the issue of confusing notations. We will ensure that the revised version incorporates these corrections.
**9. Question 2. Upper bound of performance**
We have reported the accuracy achieved by directly substituting the CPL in our method with true labels in table 5, 6 of the global response PDF file. As the number of annotated samples increases, we observe that our method's performance approaching that of the utilization of true labels.
---
Rebuttal Comment 1.1:
Comment: **9. Question 2. Upper bound of performance**
Furthermore, it is important to note that even with the direct substitution of CPL with true labels in our method, we have not yet reached the performance upper bound of active learning using NTK to estimate empirical risk. This discrepancy can be attributed to two main factors. First, we use a subset of the active learning pool to estimate empirical risk. Second, our current sample selection strategy still follows a greedy approach. Thus, we believe there is potential for substantial performance gains if these two aspects are improved in future work.
[R1] Mussmann, Stephen, et al. "Active Learning with Expected Error Reduction." arXiv preprint arXiv:2211.09283 (2022).
[R2] Yehuda, Ofer, et al. "Active learning through a covering lens." Advances in Neural Information Processing Systems 35 (2022): 22354-22367.
**Table for 3. Weaknesses. Concerns about practicality (b)**
**Table Accuracy of NTKCPL(self) and NTKCPL(al) on CIFAR-10 when the classifier is the 2-layer MLP with hidden layer width 512. All results are averages over 3 runs.**
| #Label | 20 | 40 | 60 | 80 | 100 | 200 | 300 | 400 | 500 | 1000 | 1500 | 2000 |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| NTKCPL(self) | 48.84$\pm$2.99 | 66.23$\pm$1.54 | 74.04$\pm$1.20 | 78.97$\pm$1.23 | 79.40$\pm$0.67 | 83.14$\pm$0.52 | 84.62$\pm$0.33 | 84.76$\pm$0.22 | 85.10$\pm$0.72 | 87.09$\pm$0.32 | 87.62$\pm$0.24 | 87.89$\pm$0.06 |
| NTKCPL(al) | 49.67$\pm$2.93 | 64.24$\pm$2.90 | 72.10$\pm$2.38 | 76.67$\pm$0.64 | 78.78$\pm$0.92 | 82.66$\pm$0.81 | 84.13$\pm$0.44 | 84.67$\pm$0.23 | 85.53$\pm$0.20 | 87.32$\pm$0.33 | 87.75$\pm$0.19 | 88.13$\pm$0.32 | | null | null | null | null |
Limits, approximation and size transferability for GNNs on sparse graphs via graphops | Accept (poster) | Summary: This paper analyzes the transferability and approximation qualities of Graph Neural Networks (GNNs) when applied to graphs or Graph Signal Operators (GSOs) sampled from graph limit objects, known as graphops. The authors demonstrate that when a sequence of graphs is sampled from a graphop, the sequence converges according to a well-known metric. The paper also introduces the concept of graphop neural networks, which are utilized as a tool for analysis. It is shown that when a GNN is applied to a sequence of graphs sampled from a graphop, the output converges to the corresponding output from the graphop neural network applied to the graphop itself.
Strengths: - Presents a novel integration of known techniques, specifically graphops and Graph Neural Networks (GNNs). The authors notably generalized the results for graphs converging to a graphon, which has been proven in [1,2,3].
- This research could potentially pave the way for further exploration into the transferability of GNNs in relation to sparse graphs. However, the authors fail to mention some work by Keriven et al. and Ruiz et al. that already considers non-dense graphs.
Weaknesses: - The quality of writing and presentation is poor, which hurts comprehensibility. None of the theorems is self-contained. While the proofs seem correct, they are difficult to follow due to notational issues and gaps in the argumentation (see below for a non-exhaustive list). Note that I did not check all proofs thoroughly.
- The authors make some rather odd assumptions, such as graphops being self-adjoint while also being allowed to be non-linear; the set of such operators is empty. Also, the choice of convergence notion is not clear, even though a subsection is dedicated to it.
- Some assumptions appear restrictive, e.g.: The graphops are assumed to be sampled from $[0,1]$, in contrast to many other transferability papers (e.g. [2-4]). Assumption 3.A also appears restrictive.
- The paper lacks clear organization and writing, with numerous notational errors scattered throughout. The order of presenting assumptions before other information is also confusing. The appendix likewise contains many notational issues.
- The practical and theoretical implications of the results remain unclear. For instance, it would be beneficial to understand:
a) The potential impact of this work on theorists and practitioners, b) Examples of graphs for which the results are applicable, c) How can the convergence stated in Theorem 2 and 3 be experimentally demonstrated?, and d) The restrictiveness of the assumptions made in the study.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Could you clarify how your convergence definition aligns with the one proposed by Backhausz and Szegedy? In particular, how can we ensure that the convergence results still hold if we consider a sequence of Lipschitz constants $(C_{v_n})_n$ that grows sublinearly in n? Why does $d_M$ from Backhausz and Szegedy also converge to zero in this setting?
If this is sufficient, why do the authors change the notion of convergence?
What motivates the chosen notion of convergence? If not for mathematical convenience, what other reasons justify this choice?
- What implications do these results have for practitioners? Maybe the authors can provide some (simple) experiments, where Theorem 2 and 3 are verified? This would help for clarification and realizing an example (other than dense graph convergence which was already covered in previous works)
- What implications may these results have for developing new theories? For example, do you expect to achieve generalization bounds for sparse graphs such as in [4]?
- On line 73, could you explain why nonlinear operators are significant? My understanding is that all Graph Signal Operators (GSOs) are linear operators.
- You sample nodes deterministically; can you compare the loss of generality with works that do not assume this, e.g., [2,3,4]?
- You mention that all graphops are self-adjoint. In light of this, why are nonlinear operators brought up? Note that every self-adjoint operator is linear.
- Regarding Theorem 4, could you clarify its utility? Also, the proof seems peculiar and certain terms such as $A$ and $A_n$ lack clear definitions.
For example, why does $S_{k,L(n)} (A_n) \subset S_k(A)$ hold? These profiles consist of different distributions, right? More details are provided below.
### Minor Issues and Questions:
- Line 12: Could you clarify why graphs sampled from the same graphop share structural properties? Especially, for graphops that are not induced by graphons.
- Line 24: Please consider adding [2,3].
- Line 43: Please refer to Keriven's work on "relatively sparse" graphs in the context of random graph models.
- Line 53: Do you mean "limits of GNNs of infinite sequences of graphs"?
- Line 58: [3] generalized the results to unbounded graphon operators and more diverse GNNs.
- Line 89: Ruiz's work also indicates a $\sqrt{n}$ dependence which decays to zero, so this statement seems incorrect. Also, there are other works by Ruiz considering sparse graphs.
- Line 100: Random Graph Models (RGMs) essentially consist of graphons/kernels which are graph limits, and they are not opposed to these.
- Line 102: Could you define "closeness"?
- Line 119: Please mention that the entire work considers only spectral (polynomial) GNNs.
- Line 130: The space also contains limits of sparse graphs.
- Line 185: What does "positiveness" mean?
- Line 235: Do the results generalize to higher dimensional features?
- Line 246: What is $F_m$?
- Line 310: How do you define $d_M$ for these non-linear operators that can be neither self-adjoint nor positive?
- Line 334: What do you mean with "wild"?
- Line 335: Could you elaborate on what "specifically designed for our discretization scheme" means?
- Line 341: Why "high probability"?
- Line 379: Could you clarify what this sentence means: "Then, A_n converges to the same limit as action convergence"?
#### Proof of Theorem 2:
- In bounding d_H, why is the maximum over certain quantities not considered?
- Line 731: Could you define the conditions for epsilon in the phrase "for some epsilon"?
- There seem to be numerous typos throughout, such as in Equation 56 and on line 727 ($S_{k,C_v}$).
- The definition of $U_\varepsilon$ is difficult to locate and appears incorrect.
- Equality 60 should be an inequality.
- Line 763: What does $\bar{\mathcal{E}}$ denote?
#### Proof of Theorem 4:
- Line 1007: Could you clarify what it means for $f''$ to be Lipschitz in $l^2(n)$?
- Line 1011: What does the notation $\lim_n \overline{S_{k,L(n)}(A_n)}$ signify?
[1] Ruiz, L., Chamon, L., & Ribeiro, A. (2020). Graphon neural networks and the transferability of graph neural networks. Advances in Neural Information Processing Systems, 33, 1702-1712.
[2] Keriven, N., Bietti, A., & Vaiter, S. (2020). Convergence and stability of graph convolutional networks on large random graphs. Advances in Neural Information Processing Systems, 33, 21512-21523.
[3] Maskey, S., Levie, R., & Kutyniok, G. (2023). Transferability of graph neural networks: an extended graphon approach. Applied and Computational Harmonic Analysis, 63, 48-83.
[4] Maskey, S., Levie, R., Lee, Y., & Kutyniok, G. (2022). Generalization analysis of message passing neural networks on large random graphs. In Advances in Neural Information Processing Systems.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors do not address limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback.
We name the five points raised in the Weaknesses section W1-W5, the points raised in the Questions section Q1-Q7, and the points raised in Minor Issues M1-M24. We write B-S to abbreviate Backhausz and Szegedy's original paper.
1. For comments regarding the assumptions (W3), future implications (W5, Q2, Q3) and M1, M7, see the reply to all reviewers.
2. Typos, missing references, phrasing and notational errors (W1, W4, Q7, M2-7, M9, M17-18, M20-22, M24): We will correct them and highlight other elusive notations in the revision of the paper. We will also work on improving the clarity of the paper.
3. Self-adjoint and nonlinear operators (W2, Q4, Q6, M14): Indeed, there are no self-adjoint (strictly) nonlinear operators. However, we defined 'nonlinear P-operators' as those that are 'not necessarily linear' (line 180), which includes linear operators. This is important since a GNN layer itself (as opposed to just the GSO, which is linear) is a nonlinear operator, and we would like to include it in the definition of P-operators. In fact, the whole (equivariant) GNN is also a nonlinear operator. This allows us to write Theorem 3, which uses $d_M$ (a metric originally derived to compare P-operators) to compare two GNNs. We will highlight this in the paper. Moreover, we only used the self-adjoint property of graphops to ensure that discretizing a graphop indeed gives a finite graph (Lemma 1). In Theorems 2 and 3, we no longer assume self-adjointness, so at no point are there operators that are both (strictly) nonlinear and self-adjoint.
4. Restrictive assumptions (W3): While the domain of the signal is [0,1], we can extend our results to any bounded domain, but currently we do not know of a proof for unbounded signal domains.
5. Comparison of the modified convergence and B-S's original convergence (Q1, Q4): The proof of Theorem 3 with a slowly-growing function of n mirrors the current proof verbatim. To gain intuition on the equivalence of the two notions of convergence, note that the only difference between them is our use of $(k, C_v(n))$-profiles, which consider all bounded, $C_v(n)$-Lipschitz signals, instead of just $k$-profiles, which consider all bounded measurable signals. By letting $C_v(n)$ grow to infinity, we approximate all Lipschitz functions, which are dense in $L^2$. The benefit of recovering B-S's notion of convergence is that we inherit its useful properties, such as the completeness of $d_M$ for linear operators. This modification allows us to use the Lipschitzness of signals in our proofs of Theorems 2 and 3. This is necessary, following the insight that approximating functions by step functions on intervals of equal size (a discretization) requires a Lipschitz condition (otherwise, consider extremely spiky functions). This smoothness assumption is also ubiquitous in the graphon and random graph kernel literature, and may even be relaxed using our mollification argument (Theorem 4).
6. Proof of Theorem 4 (Q7): We will provide a self-contained proof of Theorem 4 and correct the missing notation. Here, $A_n$ is the sequence of P-operators introduced in the proposition and $A$ is the B-S original limit of $A_n$. $A$ exists because Cauchy-ness of $A_n$ in our metric implies Cauchy-ness in B-S's original metric (as $(k,C_v(n))$-profiles grow to approximate $k$-profiles) and the space is complete. In line 1011, $S_{k, L(n)}(A_n) \subset S_k(A)$ should have read $S_{k, L(n)}(A_n) \subset S_k(A_n)$, which holds since all signals considered in $S_{k, L(n)}$ are also considered in $S_k$; this concludes the other containment direction.
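As a numeric aside on the Lipschitz discretization argument in point 5 above, the following minimal sketch (our own illustration, not code or notation from the paper; `step_approx_error`, `smooth`, and `spiky` are hypothetical names) shows why a Lipschitz bound is needed: a slowly-varying signal on [0,1] is approximated by its midpoint step function on n equal intervals with error shrinking like 1/n, while a rapidly oscillating ("spiky") signal is not, until n exceeds its oscillation scale.

```python
import numpy as np

def step_approx_error(f, n, grid=10_000):
    """Sup-norm error of the midpoint step-function approximation of f on n intervals."""
    x = np.linspace(0, 1, grid, endpoint=False)
    mids = np.floor(x * n) / n + 1 / (2 * n)   # midpoint of the interval containing x
    return np.max(np.abs(f(x) - f(mids)))

smooth = lambda x: np.sin(2 * np.pi * x)        # Lipschitz constant 2*pi
spiky  = lambda x: np.sin(1000 * np.pi * x)     # Lipschitz constant 1000*pi

for n in (10, 100, 1000):
    print(n, step_approx_error(smooth, n), step_approx_error(spiky, n))
# The error for `smooth` shrinks roughly like 1/n; for `spiky` it remains
# large until n exceeds the oscillation frequency.
```

This mirrors the rebuttal's point: the $(k, C_v(n))$-profiles restrict to Lipschitz signals precisely so that equal-width discretizations converge.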
Minor issues:
- M6: We cited the wrong version of Ruiz et al. 2020: their IEEE TSP submission 'Transferability properties of GNNs' has this $O(n^{-1})$ bound. It is unclear how sparse the graphs generated by their new 'sparse' model ('GNN for Community Detection on Sparse Graphs') are, but personal communication and our current inspection suggest that their model is close to graphons. We will discuss this in the paper.
- M8: We are referring to their Lipschitz assumption, which directly implies good approximation of a continuous GCN signal by a discrete sampled graph signal.
- M10: Going by our definition of sparse graphs, the graphon cut metric cannot distinguish them from the 0 graphon, and thus the space only contains trivial limits (the 0 graphon) of such sparse graph sequences.
- M11: Positiveness refers to positive-definite operators where $\langle Ax,x \rangle > 0$ for all $x$ in Dom(A).
- M12: We do not see any immediate obstacle to this generalization.
- M13: See line 109.
- M14: $d_M$ only relies on $(k,C_v)$-profiles. For nonlinear operators one can still couple each signal with its image under the operator to define the profiles.
- M15: The **image** of the operator is only required to be measurable and can be very discontinuous.
- M16: Assumption 3A/B come with a resolution set that contains the possible sizes of our discretization. Outside of this set, the assumptions do not need to hold.
- M17: ‘with high probability’ should be deleted.
- M18: This means that 1. The sequence converges in both the B-S notion and our notion; and 2. The limits are the same.
- M19: See Line 726 and 808. One can prove $\max_x F(x) \leq M$ by proving $F(x) < M$ for arbitrary $x$.
- M20: We are bounding $d_{LP}$ directly via its definition (see line 494), and thus want to find an $\epsilon > 0$ such that the condition in the definition of $d_{LP}$ is satisfied.
- M21: See Line 495.
- M22: $\bar{\mathcal{E}}_{j,A}$ is a typo for $\mathcal{E}^1_{j,A}$.
- M23: $f''$ is Lipschitz in the induced metric on the subspace $1/n [n] \subset [0,1]$ equipped with the Euclidean metric.
- M24: The bar indicates closure of a set (of distributions) in the weak convergence sense and the limit is taken as n goes to infinity (the existence of such limit is due to existence of $\lim_n S_k(A_n)$ as shown in B-S and the fact that $S_{k, L(n)}(A_n)$ converges to $S_k(A_n)$ set-wise).
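The point in M10 (sparse graph sequences and the 0 graphon) can be made concrete with a small sketch (our own toy example, not from the paper; `cycle_adjacency` is a hypothetical helper): for a bounded-degree graph such as the n-cycle, the associated step graphon has L1 norm equal to the edge density $2m/n^2$, which upper-bounds the cut norm and vanishes as n grows, so in the cut metric the only possible limit of such a sequence is the 0 graphon.

```python
import numpy as np

def cycle_adjacency(n):
    """Adjacency matrix of the cycle graph on n nodes (every node has degree 2)."""
    A = np.zeros((n, n))
    idx = np.arange(n)
    A[idx, (idx + 1) % n] = 1
    A[(idx + 1) % n, idx] = 1
    return A

# The step graphon W_n takes value A[i, j] on the cell
# [i/n, (i+1)/n) x [j/n, (j+1)/n); its L1 norm is the edge density
# A.sum() / n^2 = 2/n, an upper bound on the cut norm, so W_n -> 0.
for n in (10, 100, 1000):
    A = cycle_adjacency(n)
    print(n, A.sum() / n**2)   # densities 0.2, 0.02, 0.002
```

This is exactly why graphon-based transferability bounds become vacuous for bounded-degree sequences, motivating the graphop framework discussed in the rebuttal.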
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications and comprehensive response. In light of these, I've increased the scores for soundness and presentation by one point each, and the overall rating by two points. Both Table 1 and Figure 1 have been particularly helpful. However, I'd like to point out that the result from Keriven et al. [1] for relatively sparse graphs does not have a $n^{-1/2}$ convergence rate. Additionally, [4] presents results showing uniform convergence with respect to message passing networks, where the convergence rate depends on the dimension of the sample space. I do not assign a higher score due to my concerns about the presentation. | Summary: This paper analyzes the transferability of Graph Neural Networks (GNNs) in terms of size. It proposes Graphop Neural Networks that operate on graphop signals, and a discretization of the network. It proves that the discretizations of the graphop and the graph are close, and consequently, that two discretizations of different resolutions are close. It also shows that the corresponding Graphop Neural Network outputs are close, which implies the transferability of GNNs in size.
Strengths: My background does not support me in evaluating the paper fairly. However, I believe the tackled problem of GNN size transferability is critical, and the author's analysis using graph limit and its discretization is promising. The assumption seems mild, which could make the theory applicable to most existing GNNs and real-world sparse graphs.
Weaknesses: The related work section is scarce, which makes it more difficult for me to understand the background. The paper is purely theoretical, which is fine. Still, it would be helpful for readers like me if the authors could provide some form of experiment on synthetic graphs of variable size to better understand the implications of the theorems.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is the key difference between this work and the graphon neural network which also adopt a graph limit perspective?
- The bounds require the number of vertices in both graphs to be sufficiently large. How should the bound be interpreted when one graph is large and the other is smaller, which is the more important case for graph size transferability, or when both graphs are small?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Review Summary:
Due to my lack of background knowledge, I could not appropriately relate it to previous research and assess the proof fairly. However, the paper addresses a crucial problem in graph learning and seems to provide a sound theoretical analysis, which leaves it on the borderline.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and address individual questions and concerns:
1. Scarcity of related work: Past works on the size generalizability of GNNs rely either on graphons or on random graph models induced by a kernel, while we focus on deterministic sparse graphs. In the latter regime, we are only aware of Roddenberry et al., 2021. This is not unexpected, as many authors in the literature agree with us that the analysis of sparse, bounded-degree graphs is challenging and intricate (more in our reply to all reviewers). That said, we will include a more detailed comparison with results obtained from random graph models and extended graphon models in the final version of the paper.
2. Comparison to graphon neural networks: Our work shares a lot of structure with Ruiz et al. 2020's graphon neural network, especially in our use of graph limits. One major distinction (see the table in our reply to all reviewers) is that the graphon limit of sparse, bounded-degree graphs is the constant 0 graphon. This makes their approximation bound between a finite graph sampled from a graphon and the limiting graphon vacuous.
3. Requirement of a large number of vertices: our bounds are non-asymptotic and quantitative, with a specific rate of convergence, which is one of the strengths of our results. This means that our results hold for all finite graph sizes. In particular, our transferability bound between graphs of size m and n states that the transferability gap scales as $(\min(m,n))^{-1}$.
We will work on improving the accessibility of the writing in our revision. We would also love to help you understand more about our line of work, so please do not hesitate to post more questions that you think we can help you answer.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I believe the paper still has strong theoretical merit, while a clearer presentation and application to real-world datasets would clarify the impact of the paper. Hence, I am keeping my score. | Summary: This paper explores the theoretical perspective of whether graph neural networks (GNNs) can generalize to graphs that are different from the ones they were trained on. The authors study the transferability and approximation results via graph limits, including sparse graphs such as bounded-degree or power law graphs. The paper presents various notions of graph limits and develops quantitative bounds on the distance between a finite GNN and its limit on an infinite graph. The authors also verify the regularity assumption for various graph sequences in this study. Overall, the paper's contributions include a theoretical framework for studying the generalization of GNNs to different graphs, as well as insights into the regularity of graph sequences and the use of graph limits for approximation and transferability analysis.
Strengths: S1. The paper takes a unique perspective of studying the transferability and approximation results of GNNs via graph limits. The proposed approach is novel and provides a theoretical framework for studying the generalization of GNNs to different graphs.
S2. The authors provide rigorous mathematical proofs and analysis to support their claims, which potentially have important implications for the practical use of GNNs in real-world applications.
S3. The paper's results hold for both dense and sparse graphs, and various notions of graph limits, which makes the paper's contributions applicable to a wide range of graph-based learning tasks.
Weaknesses: W1. Lack of real-world application examples: The paper focuses primarily on the theoretical aspects of graph limits and their implications for GNN generalization. However, it would be valuable to include real-world application examples where the proposed approach can be applied and demonstrate its practical utility. This would provide concrete evidence of the significance and relevance of the work in real-world scenarios.
W2. Limited experimental evaluation: While the paper includes some experimental results, the evaluation could be more extensive and diverse. It would be beneficial to include experiments on a wider range of datasets and graph structures to demonstrate the effectiveness and generalizability of the proposed approach. Additionally, providing a detailed analysis of the experimental results, including statistical significance tests and comparisons with baseline methods, would further strengthen the empirical findings.
W3. Lack of comparison with existing methods: The paper could benefit from a more comprehensive comparison with existing methods that address the generalization of GNNs to different graphs. This would provide a clearer understanding of the novelty and effectiveness of the proposed approach. Including a comparison with state-of-the-art methods and discussing the advantages and limitations of the proposed method in relation to existing approaches would strengthen the paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Q1. Could you expand the experimental evaluation by including experiments on a wider range of datasets and graph structures? This would provide a more comprehensive understanding of the effectiveness and generalizability of your proposed approach. Additionally, providing a detailed analysis of the experimental results, including statistical significance tests and comparisons with baseline methods, would strengthen the empirical findings.
Q2. Can you provide real-world application examples where your proposed approach can be applied? Demonstrating the practical utility of your approach in real-world scenarios would further highlight the significance and relevance of your work.
Could you discuss the limitations of your proposed approach in more detail? Specifically, addressing the scalability of the approach to larger graphs, considering different types of GNN architectures, and exploring the impact of different graph properties on generalization performance would be interesting.
Q3. Have you considered the computational complexity of your proposed approach? It would be helpful to discuss the computational requirements and scalability of your method, especially when applied to larger graphs.
Q4. Can you provide more insights into the regularity assumption verified for various graph sequences? How does the regularity assumption impact the generalization performance of GNNs, and are there any specific conditions or properties that need to be satisfied for effective generalization?
Q5. Can you discuss potential future directions for your research? Specifically, addressing the scalability of the approach, exploring different types of graph structures, and considering the impact of different graph properties on generalization performance would be valuable areas for further investigation.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper does not explicitly address the potential negative social impact of the proposed approach. While the paper focuses primarily on the theoretical aspects of graph limits and their implications for GNN generalization, it would be beneficial to include a discussion on the potential ethical implications of the proposed approach. Specifically, the authors could consider the potential impact of their work on issues such as privacy, fairness, and bias in machine learning.
Additionally, the paper could benefit from a more comprehensive discussion of the limitations of the proposed approach. While the authors acknowledge some limitations, such as the need for regularity assumptions and the limited experimental evaluation, a more detailed discussion on the potential limitations and their impact on the generalization performance of GNNs would be valuable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and would like to address the individual points.
- W1. Real-world applications of GNNs on very sparse graphs (bounded degree, linear order of edges, etc.) have been demonstrated in various disciplines elsewhere. Our paper aims to explain the transferability observed widely in practice in a rigorous theoretical framework. Theory for these very sparse graphs is extremely limited in the literature and thus, lots of existing empirical findings for them are not accounted for rigorously. For example, all molecular graphs are bounded-degree, and thus extremely few existing theories can systematically explain transferability between molecule graphs even when they share lots of structural similarities (for example graphs of polymers with different numbers of units). Other realistic graphs, such as road networks, social networks, evolutionary trees, grammar trees are also mostly sparse due to physical constraints. These have also been studied extensively elsewhere and we refer the reviewer to applied publications for these phenomena.
- W2. We believe the reviewer is mistaken, since we do not include any experimental results in this paper, so there are none to strengthen. Our main contribution is a theoretical analysis of the transferability of GNNs, rather than a new method or architecture. That being said, we will add proof-of-concept experiments to demonstrate our theorems.
- W3. We will extensively add to our related work section. See also more details in the response to all reviewers. However, since so few theoretical works in the literature share the same emphasis as ours (transferability on very sparse graphs, which are notoriously hard to analyze), it is unclear what the ‘state-of-the-art methods’ are.
- Q1. Please refer to the reply to W2 above.
- Q2. Relevance of our analysis to real-world applications: please refer to the response to W1. Scalability of our approach to larger graphs: since our bounds are quantitative and non-asymptotic, they apply to graphs of any size, large or small, and even to graph limits. In this paper, we only consider spectral GNNs to demonstrate the main takeaways in terms of the metric and the corresponding convergence. However, we believe our approach works for other GNN models, such as graph isomorphism networks (GIN). We also agree that studying how different graph structures impact the rate of convergence under the metric $d_M$ is an interesting direction forward, but it is outside the scope of this paper.
- Q3. The main contribution of this paper is a convergence and transferability analysis of existing GNN models, as opposed to introducing new models. That said, we will provide small experiments demonstrating our Theorems in the final version of the paper.
- Q4. For a discussion of our assumptions, please refer to the general address to all reviewers.
- Q5. Please refer to the “Implication for future empirical and theoretical work” section in our reply to all reviewers.
Limitations: we will address some ethical concerns of GNN development. That said, seeing that our work theoretically verifies transferability experiments observed elsewhere, we share the same ethical concerns as most papers in the GNN literature, without meaningful deviations.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of rebuttal
Comment: I thank the authors for the replies. However, some of the replies only amplified my concerns.
- W1. I understand the theoretical nature of this work, but it is still beneficial for even a very theoretical work to clearly discuss its application domains, to allow proper leverage of the theoretical findings. I see the authors mention molecular graphs, road networks, grammar trees etc. These are good application examples themselves, but it looks like the authors are rather reluctant to properly discuss them in the paper.
- W2. It is good to see that the authors have plans to add some proof-of-concept experiments, but with the current lack of details, I cannot tell what kind of experiments can be added and how helpful they would be to support the theoretical findings.
- W3. "State-of-the-art" means anything that exists and can best solve the problem at hand. I hoped the authors could clearly discuss the status of existing studies and enhance the discussions in the paper, instead of saying "it is unclear what the state-of-the-art methods are". A statement like this is not appropriate in the rebuttal, and certainly even more inappropriate to appear in a paper.
Due to these amplified concerns, I had to reduce my rating.
---
Reply to Comment 1.1.1:
Title: Regarding "Acknowledgement of rebuttal"
Comment: We thank the reviewer for the second reply and will now address the extra concerns highlighted:
- W1. We agree with the reviewer that it is important to highlight the applications that can benefit from our theoretical analysis. We will add a discussion of those to the paper.
To the best of our knowledge, ours is the first paper to rigorously prove transferability of sparse graph sequences. Therefore, all applications that involve size transferability of GNNs on graph sequences satisfying our assumptions (see Table 1 in the additional rebuttal pdf) are directly verified by our theorems. Furthermore, by using a framework general enough to include graph sequences of various sparsity, and by demonstrating that non-regular graph sequences too (e.g. polymer graphs, see Table 1 and Figure 1) can satisfy our assumptions, we argue that our theoretical findings can impact many future applications. These discussions will be present in the revision of the paper. We would like to reiterate that our examples of molecular graphs, road networks, grammar trees are all sparse, bounded-degree graphs, which are what our results cover, while most existing theoretical approaches do not cover them.
- W2. We will verify the statement of our theorems by sampling graphs of various sizes from a limiting object, and then empirically checking the distances between the GNNs viewed as operators. We will also design robustness tests to understand the extent to which our results still hold or fail as our theoretical assumptions on the graph structures are violated. This will give us a further understanding of the results that is useful to practitioners.
- W3. We believe there was a misunderstanding in our statement regarding comparisons to state-of-the-art methods that we hope to clarify. The current theoretical state-of-the-art is summarized in Table 1 and discussed in the “Related work” section of the main paper. Among the existing works, none gives quantitative bounds for multilayer GNNs while covering different graph sparsity regimes - which is the setting of our paper. For example, for bounded-degree graphs (first column of Table 1), only one other paper gives transferability guarantees for a 1-layer GNN, and with an inexplicit bound. We will add this to the paper, along with a more detailed discussion and comparison to random graph frameworks (see “Deterministic vs random graph” in our reply to all reviewers). If the reviewer is asking for a comparison to state-of-the-art methods in applied GNN research: our paper currently does not suggest any modifications to the pipelines used by practitioners. Rather, we seek a deeper understanding of existing methods and datasets by providing an analysis of how they would perform when applied to graphs of a different size than the training data. Our analysis complements and explains empirical analyses. | Summary: This work essentially extends the existing theoretical study of GNNs through graphons by replacing them with graphops, an operator view on graphs that allows for results on sparse graphs. Then, standard results, on size generalization for instance, are presented.
Strengths: The paper is technically sound as far as I could check. The theoretical contributions are non-trivial and present enough mathematical rigor for this venue. It is a first step towards moving beyond graphon-restricted analysis of GNNs.
Weaknesses: The issue I have with this paper is its usefulness for machine learning applications. We understand the space L^1([0,1]^2) pretty well for graphons, but the newly introduced operator view from Backhausz and Szegedy isn't clear to me yet. Could the authors give the readers more intuitions on how P-operators can generate real-world graphs? Remember that at the end of the day the GNN is applied to real-world graphs. Graphons can be seen as latent factor models, which is somewhat interpretable as a graph-generating process. How can P-operators be seen by the readers? This might be a flaw in my understanding that the authors can help me with, so I'm willing to raise my score. In particular, how can (k,L)-profiles be interpreted in the real world? Graphs as operators make sense in a lot of fields, but I'm a little concerned that graph data isn't a fit for this view.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the authors give the readers more intuitions on how P-operators can generate real-world graphs?
How can (k,L)-profiles be interpreted in the real world?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. Regarding the questions and concerns:
1. On the view of graphs as P-operators: We first want to note that P-operators are very general operators satisfying a bounded (operator) norm condition. Viewing graphs as operators is not unfamiliar in finite graph analysis: the adjacency matrix of an n-vertex graph is a matrix and thus a linear operator $A$ from $\mathbb{R}^n$ to $\mathbb{R}^n$, whose action describes a single layer of a message-passing algorithm: if $f \in \mathbb{R}^n$ is a vector of node features, then $Af$ is the output of a single layer of message passing. The analysis of graphons $W$ (dense graph limits) also makes use of properties of the Hilbert-Schmidt operator $H$ defined as $Hf(y) = \int_0^1 W(x,y)f(x) dx$. Similarly, the image of this operator is the pre-activation of a single layer of a graphon neural network. In this view, one can even bypass the use of a generation model (kernels/graphons) and directly model graphs by the images (actions) of their adjacency matrices or their transformations.
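To make the operator view concrete, here is a minimal sketch of our own (an illustration, not code from the paper) of an adjacency matrix acting as a linear operator, where one application of $A$ performs one round of unnormalized message passing:

```python
import numpy as np

# Toy 4-node path graph; its adjacency matrix A is a linear operator
# from R^4 to R^4.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

f = np.array([1.0, 2.0, 3.0, 4.0])  # one scalar feature per node

# One layer of (unnormalized) message passing: each node sums the
# features of its neighbors.
Af = A @ f
print(Af)  # [2. 4. 6. 3.]
```

The graphon operator $Hf(y) = \int_0^1 W(x,y)f(x) dx$ is the continuous analogue of this matrix-vector product.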
2. P-operators in real-world graphs: While dense graph limits such as graphons are amenable to more analysis tools, we would argue that P-operators as graph limits are more realistic for real-world graphs, albeit with a more difficult analysis - as you have seen in this paper. This is because real-world graphs are very rarely dense (at least quadratic order of edges) or even relatively sparse ($n\log n$ order of edges). Molecule graphs, evolutionary trees, grammar trees, social networks, for example, are usually sparse, bounded-degree even, because of real-world physical constraints. Thus, P-operators capture more realistic graphs in real-world datasets. Furthermore, graphon limit model is not enough: by definition, under the graphon model, the limit of these sparse graphs vanishes and cannot be distinguished from the constant 0 graphon, making their bounds trivial. It is also worth pointing out that both finite graphs and graphons are examples of graphops (which in turn are P-operators), so we have a larger graph limit model altogether.
3. $(k,L)$-profiles real-world interpretation: It is easier to gain intuition in real-world examples by considering $k$-profiles. After all, in Section 4.4 we show that with minimal overhead, $(k,L)$-profiles yield the same convergence as $k$-profiles. When $k = 1$, an element of a $1$-profile of a finite $n$-vertex graph is a joint probability distribution of the form $(X, Y)$ where $X$ is drawn from an empirical distribution supported on some node features $f$ and $Y$ is drawn from an empirical distribution supported on the image of $f$ after $1$ round of message passing on the graph. The $1$-profile is then the set of such distributions when $f$ is taken over a set of reasonably regular (measurable, bounded, Lipschitz) node features. Therefore, the $1$-profile couples the input and output of a message passing layer and captures their dependence structure across all reasonable node feature vectors. Please refer to Backhausz and Szegedy’s original paper for some illustrations of $k$-profiles. We will provide our own illustrations in the next revision of the paper.
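As a toy illustration of the $k = 1$ case described above (our own example, not from the paper), for a fixed feature vector the element of the empirical $1$-profile is simply the set of node-wise (input, output) pairs of one message-passing round, viewed as a joint distribution under a uniformly random node:

```python
import numpy as np

# 3-node path graph and a node-feature vector f.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
f = np.array([0.0, 1.0, 2.0])
Af = A @ f  # one round of message passing

# Drawing a node v uniformly at random and reading off (f[v], (Af)[v])
# yields the joint distribution (X, Y) coupling input and output.
pairs = [(float(x), float(y)) for x, y in zip(f, Af)]
print(pairs)  # [(0.0, 1.0), (1.0, 2.0), (2.0, 1.0)]
```

The $1$-profile then collects such joint distributions over all reasonably regular choices of $f$.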
4. Graph generation via P-operators: Please refer to the reply to all reviewers section.
---
Rebuttal Comment 1.1:
Title: ack rebuttal
Comment: Thank you for the reply. I think the paper still needs intuition on the relationship to real-world graph data, that was not provided in this reply. The paper still has valid theoretical contributions, so I keep my score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback and would like to emphasize:
1. Our work establishes quantitative, non-asymptotic transferability bounds for a wide range of sparse graphs, which are notoriously hard to analyze. Please refer to Table 1 in the additional rebuttal pdf file for a concise comparison to related works. This will be in the final version of the paper.
2. Sparse graphs are notoriously difficult to analyze:
- Regularity assumptions are likely necessary: Although some of our assumptions (3A and 3B) seem restrictive, only one of them needs to hold for the results to go through. Furthermore, an unconditional proof of our Theorem 2 would solve the Aldous-Lyons conjecture for bounded-degree graph limits. While this does not prove such a result is impossible, we want to point out that graphop convergence is intricate and nontrivial to analyze.
- Pathological behavior of graph spectrum in the limit: It is well known that for many bounded-degree graph sequences, eigengaps do not converge and that eigenvalues and eigenvectors may not even exist for limiting operators. Therefore, these sequences are not amenable to spectral methods that use eigenvalue/eigengap convergence (e.g., Levie et al., 2021).
3. Sparse graphs and bounded-degree graphs are more realistic in real-world models. See Table 1 for examples that satisfy our assumptions. We have recently verified that polymer graphs (refer to Figure 1 in the additional rebuttal pdf for an illustration of this graph), which model the repetitive structure of polymer molecules, satisfy our assumptions. Unlike other example graphs in the paper, polymer graphs are not necessarily regular graphs.
4. Deterministic vs random graph: at the time of writing, we considered these two distinct approaches to size transferability. While random graph models are more flexible and usually have stronger results and wider tools for analysis, results for random graph models usually only hold with high probability. In comparison, deterministic graph sampling is more suitable if there is already a deterministic sequence of graphs one has in mind (for example, images of certain resolutions) and wants to compute transferability bounds between them. Results for deterministic graph sequences must also hold in the worst case since there is no randomness to downplay the probability of sampling these graphs. However, seeing that the techniques in both approaches are similar in existing literature, in the final version of the paper, we will include a more thorough discussion of random graph models.
5. Experiments: Comprehensive empirical studies of transferability on sparse graphs have already been done extensively in the literature, especially since sparse graphs are more common in real-world datasets than dense ones. We will add proof-of-concept experiments that verify some of our results in the final version of the paper.
6. Implication for future empirical and theoretical work: Since our approach to analyzing very sparse graphs is very new, there are lots of exciting avenues to consider in the future, both in practice and in theory. By introducing P-operators, graphops, and the accompanying metric (with our relaxation to $(k, L)$-profiles for easier use), we contribute to the toolbox of techniques used to analyze and compare GNNs of different graph sizes. This may have impacts on other aspects of GNNs, such as expressiveness (universality theorems under this metric) or proving new generalization bounds beyond size transferability. As our results apply to a much wider range of graph sequences than previous ones, they may also serve as a unifying framework for future studies. From an empirical standpoint, our results ensure size transferability between two graphs under appropriate structural assumptions (both are sampled from the same regular graphop). Testing how robust size transferability is to failure of such structural assumptions will help practitioners predict transferability and non-transferability of sparse graph sequences. Finally, the connection between sparse graph convergence and weak convergence of distributions is yet another rich future direction. These two concepts are tied together since the $d_M$ metric is built on top of the Lévy-Prokhorov distance ($d_{LP}$), which metrizes weak convergence, and most of the proofs boil down to bounding $d_{LP}$. This connection may allow techniques used in optimal transport or Wasserstein distance computation to benefit this line of work. More directly related to this work, generalizing our results to higher-dimensional node features appears to be straightforward. Further relaxing Assumptions 3A and 3B is also interesting and would greatly improve this approach.
7. Graph generation via P-operators: The discretization scheme introduced for graphops can be roughly understood as partitioning the vertex set into finitely many sets and merging nodes and their connections within these sets, with an appropriate scaling of the edge weights. Imagine blurring an n x n matrix into an n/2 x n/2 matrix by doing some form of average pooling over each distinct 2 x 2 square. This makes rigorous the notion of making a high-resolution graph on a huge number of vertices more ‘blurry’ by merging nodes in a way that still maintains the smoothness conditions of Assumptions 3A/3B, which is reminiscent of the real-world procedures of training with low-resolution images before fine-tuning with higher-resolution ones, or sampling from the low-frequency graph Fourier transform domain. We will clarify these intuitions in the final version of the paper.
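The 'blurring' analogy can be sketched in a few lines (an illustration of our own; the actual discretization scheme also rescales edge weights appropriately):

```python
import numpy as np

def coarsen(A, s=2):
    """Average-pool a matrix over s x s blocks, merging nodes in
    groups of s -- a crude 'blurring' of a weighted graph."""
    n = A.shape[0] // s
    return A.reshape(n, s, n, s).mean(axis=(1, 3))

# A toy matrix standing in for a weighted adjacency matrix.
A = np.arange(16, dtype=float).reshape(4, 4)
B = coarsen(A)
print(B.shape)  # (2, 2)
```

Each entry of the coarsened matrix is the mean edge weight between the corresponding groups of merged nodes.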
Finally, we note that all citations that appear in our rebuttals can be found in the reference section of the current version of the paper, except that Ruiz et al., 2020 now refers to their IEEE TSP submission ‘Transferability properties of GNN’ available on arXiv.
Pdf: /pdf/a9b39bd54032cc18b33c8e29d313a46a81a6bcf0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Can Pre-Trained Text-to-Image Models Generate Visual Goals for Reinforcement Learning? | Accept (poster) | Summary: The objective of this paper is to harness the capabilities of pre-trained text-to-image models and image editing techniques in order to facilitate robot learning. This is achieved by leveraging these tools to modify the current scene towards the intended image objective.
Strengths: The paper presents a novel approach that harnesses the knowledge embedded in large pre-trained generative models to facilitate zero-shot guidance for robot learning tasks. Additionally, the paper asserts that its edited image functionality outperforms existing methods. Lastly, the paper identifies a gap in current text-conditioned image generation models.
Weaknesses: 1. Figure 1 can be enhanced to effectively communicate the problem the author intends to address, while also succinctly highlighting their contribution and providing an overview of their proposed method.
2. In recent years, significant advancements in diffusion models have led to the emergence of notable works such as CACTI, GenAug, and ROSIE. These works have demonstrated considerable improvements in generative modeling and learning. However, none of these works were used as baselines in the paper, including DALL-E-Bot, which bears the closest resemblance to this work. While acknowledging technical differences, it would be valuable to compare and contrast the proposed method with these related works.
3. I appreciate the author's effort to compare various methods of image editing with their proposed approach. However, many of these methods are pretrained on different datasets, necessitating the evaluation of their performance on a comprehensive image editing dataset for a more equitable and meaningful comparison.
4. The author's inclusion of both simulation and real-world demonstrations is commendable. However, to enhance the persuasiveness of the findings, it would be beneficial to introduce slightly more complex tasks or incorporate additional distractors during evaluation to assess the robustness of the proposed method.
5. The author may also consider exploring the line of work that involves generating demonstrations using video editing techniques for robot learning, as exemplified in "Human-to-Robot Imitation in the Wild," RSS 2022.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I hope the authors can address all the questions and concerns i have in the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 1 poor
Limitations: Yes, the authors have addressed all the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear **Reviewer WQFw**, we thank you for your detailed and thorough review. In the following sections, we seek to address each of your concerns.
---
**Q**: Figure 1 can be enhanced to effectively communicate the problem the author intends to address, while also succinctly highlighting their contribution and providing an overview of their proposed method.
**A**: We thank the reviewer for this suggestion and the updated Figure 1 is shown in Figure 5 of the attachment. We added a subfigure picturing what the problem is that we are trying to address.
**Q**: In recent years, significant advancements in diffusion models have led to the emergence of notable works such as CACTI, GenAug, and ROSIE. These works have demonstrated considerable improvements in generative modeling and learning. However, none of these works were used as baselines in the paper, including DALL-E-Bot, which bears the closest resemblance to this work. While acknowledging technical differences, it would be valuable to compare and contrast the proposed method with these related works.
**A**: CACTI, GenAug, and ROSIE use diffusion models to perform data augmentation on existing expert demonstrations, improving the robustness and generalization ability of the manipulation policy. It should first be noted that LfVoid aims to solve a different problem than this line of work: we utilize a large-scale diffusion model to translate language instructions into corresponding goal images. Therefore, the outcome of solving the task is not provided by demonstrations but by the large-scale pre-trained model, so the distinction between LfVoid and previous works is clear. That being said, while acknowledging the differences, the use of diffusion models to perform image editing is similar across these works and LfVoid. The editing modules in this line of work mainly perform localized image inpainting conditioned on human-specified or automatically generated object masks. Since the models used in these works are not open-sourced, we cannot directly compare LfVoid with them.
DALL-E-Bot is a task-specific method that can only solve object rearrangement tasks, and thus its policy cannot generalize to our manipulation tasks. However, we are able to compare the goal generation method proposed by DALL-E-Bot with LfVoid. More specifically, we provide the DALL-E 2 model with human-specified masks and the text prompt describing the edited image, and report the qualitative results in Figure 1 and Figure 2 and the quantitative results in Table 1 and Table 2 of the attachment. Despite DALL-E 2 having the advantage of a user-specified mask (the region outside the mask will remain unchanged), LfVoid still has better or comparable performance in all the tasks.
**Q**: I appreciate the author's effort to compare various methods of image editing with their proposed approach. However, many of these methods are pre-trained on different datasets, necessitating the evaluation of their performance on a comprehensive image editing dataset for a more equitable and meaningful comparison.
**A**: We would like to point out first that both LfVoid and Imagic are **fine-tuning** methods, not models, that can be used on any pre-trained diffusion model. To ensure a fair comparison, we use the same open-source Stable Diffusion model (v1-4) in our experiments when evaluating LfVoid and Imagic.
Secondly, while InstructPix2Pix is indeed a released model, it also builds upon the open-source Stable Diffusion model: it fine-tunes the model parameters with generated paired data so that the model can perform editing based on an input image and a text instruction.
Therefore, we believe that our comparison between these methods is reasonable, since all three editing methods build upon the same pre-trained Stable Diffusion model.
**Q**: The author's inclusion of both simulation and real-world demonstrations is commendable. However, to enhance the persuasiveness of the findings, it would be beneficial to introduce slightly more complex tasks or incorporate additional distractors during evaluation to assess the robustness of the proposed method.
**A**: We have added a distractor object to push-sim, and have introduced three real-world tasks using the UR5 robot arm and reported the goal generation results in Figure 3 in the attachment.
**Q**: The author may also consider exploring the line of work that involves generating demonstrations using video editing techniques for robot learning, as exemplified in "Human-to-Robot Imitation in the Wild," RSS 2022.
**A**: LfVoid uses image editing methods to generate the goal frame for RL, while WHIRL uses video inpainting methods to align human interaction videos and robot interaction videos, and can therefore measure how well the robot performs compared with the target human videos. The editing methods create a representation space for this objective function, which selects the best robot interactions to perform imitation learning. The comparison with this line of work is very interesting, and we will add it to the references.
We kindly refer reviewers to the general response section for common questions. If there's anything unclear in the above, we're more than glad to further discuss the details.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I would like to thank the authors for the clarification and response.
---
Reply to Comment 1.1.1:
Title: Further Discussion
Comment: Dear reviewer WQFw,
Thank you for acknowledging our clarification and response.
Should there remain any unresolved concerns of the paper you feel need further explanation, please let us know. We are eager and open to further discussions to ensure clarity and comprehension.
If you find that our responses have adequately addressed the initial concerns, we kindly request you to consider revisiting the rating of the paper. Your insights and evaluation are crucial to us, and we sincerely hope our efforts align with your expectations. | Summary: Post rebuttal updating score to 5
This work proposes to use generated visual goals for RL. A diffusion model based approach is used to edit visual observations based on text prompts. The proposed image editing approach is shown to be better than prior text-based image editing approaches based on a human evaluation. Visual goals predicted using the proposed method shows improved RL performance compared to baselines that use other image editing mechanisms.
Strengths: * Leveraging pre-trained models for image editing is an interesting idea
* Proposed approach seems to work better than some prior editing approaches
* Some indication that the generated visual goals help learning
Weaknesses: Originality: Imagining visual goals for RL has been explored in prior work (e.g., [1] https://arxiv.org/pdf/1807.04742.pdf), which limits the originality of this work. The use of diffusion models could be new.
Significance:
* Experimental results do not demonstrate the need for visual goals. For instance, there are no baselines in the RL experiments which are not based on imagined goals.
* Very few tasks are tested
* Baselines are inadequate
* Approach is too specific to the tasks considered and generality of approach is questionable
Presentation/Clarity
* Need better clarity on motivation, background on methods, problem formulation and approach overview
* There needs to be a clear motivation/overview section before section 3. Technical details lack context and are difficult to follow. For instance, section 3 talks about latent diffusion models without providing any background/motivation. There needs to be a technical overview section on diffusion models.
* Method description is vague and mathematical details are missing.
- 3.1.1 Description of the optimization approach is vague.
- 3.1.2 What is inversion and why is it necessary?
- 3.1.3 Assumes prior knowledge and familiarity with diffusion models. What is x_0? What does it mean to replace attention maps?
Experiments
* Is the comparison on image editing fair? The proposed method is fairly specific to the tasks considered while it is compared against general purpose image generation/editing techniques such as pix2pix.
* Inadequate baselines for the RL experiment: There seem to be no baselines that are not based on imagined goals/image editing.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * Many details about the setting are unclear. Need better clarity on the following.
- Where do the text prompts/instructions for editing come from?
- What are the inputs and the outputs of the components of the approach?
- How does learning work?
* Is it possible to use any quantitative metrics for image editing experiments (e.g., based on CLIP encoders)?
* For the manually defined visual goals how many images were manually created? How many task instances are there altogether?
* line 297 mentions ‘a certain level of prompt tuning is needed’, but such details are missing from the main text.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear **Reviewer NX6j**, we thank you for your detailed and thorough review. In the following sections, we seek to address each of your concerns.
---
**Q**: Experimental results do not demonstrate the need for visual goals: no baseline in the RL experiments that are not based on imagined goals.
**A**: We would like to point out that the CLIP baseline mentioned in line 260 is not based on imagined goals. It only requires a user-specified text prompt and the CLIP distance between the text prompt and observation image is used as a reward to train RL. Results show that LfVoid clearly outperforms it, demonstrating the advantage of providing visual guidance.
**Q**: There needs to be a clear motivation/overview section before section 3. Technical details lack context and are difficult to follow.
**A**: We appreciate your advice and will polish the paper to improve the delivery in the final version. In particular, we will improve the corresponding part of the paper. Due to the length limit, it is hard to go into too much detail about the technical background of diffusion models, and it is common practice to refer readers to relevant papers for more details, including the image editing baselines used in LfVoid [1,2,3]. Nevertheless, we agree that it is always better to provide sufficient background in the paper itself.
**Q**: Descriptions of the optimization approach is vague.
**A**: For the optimization method used in goal generation, we use the exact optimization methods used in the released code of DreamBooth and Null-text Inversion: AdamW for DreamBooth and Adam for Null-text Inversion. The other parameters are all the default values provided by the official code.
**Q**: What is inversion and why is it necessary?
**A**: We need to invert the provided source image into a diffusion process so that our editing module can generate the target image conditioned on both the source image and the text prompt. Diffusion models can only synthesize a target image conditioned on a text prompt. Through inversion, we can extrinsically condition the generation process on both the source image and the text prompt [3]. We provide results where we eliminate the inversion module completely (Dreambooth P2P in Figure 1 and Dreambooth P2P-DD in Figure 2 of the attachment) and observe that the generated target image differs greatly from the source image.
**Q**: What is x_0? What does it mean to replace attention maps?
**A**: $x_0$ denotes the final generated image at the last diffusion step. During each generation step, the diffusion model uses cross-attention to process the image and the text prompt, and attention maps are produced. We can replace the attention maps between the diffusion processes that separately generate the source and target image. Please refer to line 130 and the original paper[1] for more details.
**Q**: Where do the text prompts/instructions for editing come from?
**A**: They are provided by humans, which is a description of the desired goal image.
**Q**: What are the inputs and outputs of the components of the approach?
**A**: For the Feature Extracting Module, the input is several initial visual observations and a prompt describing the object we want to remember. The output is a fine-tuned model using the <sks> token to represent that object.
For the Inversion Module, the input is the source image and the text prompt description. The text prompt can contain the <sks> token to better describe the scene. The output is a noise vector and a series of optimized null-text embeddings for each diffusion timestep.
For the Editing Module, the input includes both the input and output of the Inversion Module, as well as the text prompt describing the target image(for appearance-based editing), or the bounding box and trailing tokens(for structure-based editing). The output is the edited image.
We will include this in Section 3.1.
**Q**: How does learning work?
**A**: We assume this is asking about the RL procedure. Generally speaking, it consists of two alternating steps. Given a set of generated goal images, a classifier is trained to distinguish the goal images from observations encountered by the RL agent in a replay buffer. Meanwhile, the RL agent is optimized to gain more reward from the classifier by reaching states that resemble the goal image, granting it the ability to reach the desired goal at test time.
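The alternating scheme can be sketched as follows (a simplified, self-contained illustration of our own with a 1-D "state" and a logistic classifier; the actual method operates on images with learned encoders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Example-based reward: a logistic classifier separates goal examples
# (label 1) from replay-buffer observations (label 0); its confidence
# then serves as the agent's reward signal.
goals = rng.normal(2.0, 0.3, 64)    # example goal states, near 2
replay = rng.normal(0.0, 1.0, 64)   # states visited by the agent so far

x = np.concatenate([goals, replay])
y = np.concatenate([np.ones(64), np.zeros(64)])

w, b = 0.0, 0.0
for _ in range(500):  # gradient descent on binary cross-entropy
    p = 1 / (1 + np.exp(-(w * x + b)))
    g = p - y
    w -= 0.1 * np.mean(g * x)
    b -= 0.1 * np.mean(g)

def reward(state):
    """Classifier confidence that `state` looks like a goal example."""
    return 1 / (1 + np.exp(-(w * state + b)))

print(reward(2.0) > reward(0.0))  # True: goal-like states earn more reward
```

In the full method, this classifier is periodically retrained as the replay buffer grows, while the policy is updated to maximize the classifier-derived reward.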
**Q**: For the manually defined visual goals how many images were manually created?
**A**: In the pipeline of LfVoid, there are no manually defined visual goals since LfVoid can synthesize the visual goals requiring only a user-defined text prompt and/or bounding box. For evaluation purposes, we sample 1024 visual goals to test the upper bound of LfVoid and report this result as a baseline.
**Q**: Unfair comparison to general purpose methods.
**A**: We would like to argue that our goal-generation method is a general editing method that can perform various editing tasks not limited to the robotics context, and therefore the comparison on image editing is fair. Secondly, the example-based learning of LfVoid is also a general-purpose method compatible with any visual RL setting. The design choices and algorithms of LfVoid are not specific to those tasks.
We kindly refer reviewers to the general response section for common questions. If there's anything unclear in the above, we're more than glad to further discuss the details.
[1] Hertz, Amir, et al. "Prompt-to-prompt image editing with cross attention control." arXiv preprint arXiv:2208.01626 (2022).
[2] Kawar, Bahjat, et al. "Imagic: Text-based real image editing with diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] Mokady, Ron, et al. "Null-text inversion for editing real images using guided diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I appreciate the response.
Clarity issues were partially addressed by the author response. I strongly recommend improving the clarity of the draft to make it more accessible to a wider audience.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We thank the reviewer for the effort and the positive feedback. We will continue to polish the paper in the final version. | Summary: This paper demonstrates a novel way to utilize existing large-scale text-to-image models for robot learning. Specifically, they modify existing pre-trained text-to-image models to produce visual goals for example-based reinforcement learning, before learning policies from the generated images. Experimentation in both simulation and the real world shows that the proposed modifications to pre-trained text-to-image models are able to generate higher-fidelity images than the baselines, and that this also leads to gains in downstream performance.
Strengths: - Well-written. The paper is well-written, and the figures aid in the understanding of the methodology.
- Novelty. The proposed approach seems to be a novel one which builds on ideas from the literature.
- Real world experiments. The inclusion of real world experiments for both the image goal generation and the RL training make the strength of the proposed approach a lot more convincing than otherwise.
- Strong results. The proposed approach outperforms baselines from prior work as well as the ablations. The qualitative results provided in Figures 4 and 5 display impressive performance of the model.
Weaknesses: - Evaluation. While both qualitative and quantitative metrics for image goal generation are included, the worry is that the qualitative examples may be cherry-picked and that the user study conducted for a quantitative assessment may not be extensive enough. Would it be possible to use the standard metrics which conditional image generation papers utilize to quantitatively evaluate the proposed approach? For instance, once the ideal goal image has been generated, could the output of the proposed method be compared to it using either MSE or LPIPS [1]? Furthermore, would it be possible to provide more qualitative examples? There are about three more examples per task in the appendix, but these all seem to be very similar to one another and limited in diversity. Particularly, more visualizations of the real world images would be appealing as this setting is much harder than the simulated images.
- Prompt tuning. Ideally, at test-time, you are just given the current image, and a text-based goal prompt, nothing else. Line 296 says that "a certain level of prompt tuning is needed in order to achieve optimal editing performance." How much prompt tuning was done for the method? Was the same done for the baselines?
[1] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In IEEE Conf. Comput. Vis. Pattern Recog., 2018.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Qualitative Results. While the language instructions for Imagic and the proposed method are the same, ensuring a fair comparison, why are they different for InstructPix2Pix? Furthermore, why is the ablation only done for Figure 5, and not for Figure 4?
- RL Experiments. Were all hyper-parameters in the RL training the same across methods? In Figure 7, the number of trajectory steps seems to vary across tasks -- why is this the case? Are the trends different if more trajectory steps were allowed?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes, the authors have adequately addressed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear **Reviewer pQmz**, we thank you for your detailed and thorough review. In the following sections, we seek to address each of your concerns.
---
**Q**: Would it be possible to provide more qualitative examples? There are about three more examples per task in the appendix, but these all seem to be very similar to one another and limited in diversity.
**A**: We have provided more diverse qualitative results of LfVoid in Figure 7 of the attachment. Moreover, we add three additional real-world environments with a UR5 robot arm and one simulation task with a distractor to further increase the diversity. The results are shown in Figure 3 of the attachment and we observe that LfVoid can successfully perform all the desired edits in various environments.
**Q**: Why is the ablation only done for Figure 5, and not for Figure 4?
**A**: We thank the reviewer for this question and have added additional ablations for both Figure 5 and Figure 4. Please refer to Figure 1 and Figure 2 in the attachment. It should be noted that LED(Sim) and LED(Real) can perform the desired editing without the DreamBooth token and therefore we did not use DreamBooth when generating the goal images of these two environments.
**Q**: While the language instructions for Imagic and the proposed method are the same, ensuring a fair comparison, why are they different for InstructPix2Pix?
**A**: Both LfVoid and Imagic expect the text prompt to directly describe the goal image, while InstructPix2Pix expects the text prompt to describe the editing instructions to perform on the input image. This results in a slightly different prompt for these methods.
**Q**: Were all hyper-parameters in the RL training the same across methods?
**A**: Yes, the hyper-parameters are the same and reported in Table 6 in the Appendix.
**Q**: In Figure 7, the number of trajectory steps seems to vary across tasks -- why is this the case? Are the trends different if more trajectory steps were allowed?
**A**: This is because different tasks require different numbers of steps to finish when humans collect the demonstrations. For Wipe-Real, we found that about 18 steps are needed to completely wipe off the markings; for LED-Real the number is 16, and for Push-Real it is 14.
We kindly refer reviewers to the general response section for common questions. If there's anything unclear in the above, we're more than glad to further discuss the details.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The authors' clarifications have addressed all of my questions.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We thank the reviewer for the feedback. | Summary: In this paper, authors propose a new method called LfVoid to tackle RL tasks. LfVoid can generate consistent visual goal frames through using pretrained LDM and subsequently train a discriminator to output rewards for downstream RL task. In order to improve editing consistency, DreamBooth, null-text inversion, P2P and Directed diffusion are incorporated into the pipeline.
Strengths: 1. This paper is well written, clear and easy to understand.
2. An organic integration of several techniques for controlled generation.
3. Compared with existing editing methods, only the proposed LfVoid correctly modifies the desired objects and makes no obvious changes to other irrelevant things.
4. The improved generative quality subsequently enhances the performance of example-based RL.
Weaknesses: 1. Using an LDM is overkill for this benchmark.
2. Lacks a comparison with other RL methods leveraging goal-frame generation.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. The evaluation benchmark is a little simple which more or less wastes the strong expressive power of pretrained LDM. What is the advantage of using LDM in your application compared with training a latent VAE to generate the goal frame except for less training frames?
2. As mentioned in summary, 4 techniques, including DreamBooth, null-text inversion, P2P and Directed diffusion, are incorporated but the ablation study only compares null-text P2P-DD and null-text P2P. In my opinion, the effects of DreamBooth and null-text inversion are overlapped. Can you provide any experiment to validate the improvement brought by DreamBooth token? Also, I understand the difficulty of ablation study in this work due to the lack of effective quantitative comparison. Is it possible to design some new metrics that only focus on the region of interest since the bounding boxes are provided?
3. Comparison with existing RL methods leveraging goal frame generation, for example, Goal-Aware Prediction[1], was not provided. Can you compare LfVoid with these baselines or explain why this is not meaningful?
[1]. Nair, S., Savarese, S. & Finn, C.. (2020). Goal-Aware Prediction: Learning to Model What Matters. <i>Proceedings of the 37th International Conference on Machine Learning</i>, in <i>Proceedings of Machine Learning Research</i> 119:7207-7219 Available from https://proceedings.mlr.press/v119/nair20a.html.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: This paper solved the image editing problem required for goal frame generation. A solid contribution was made. I would increase my rating if my concerns are properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear **Reviewer jYPU**, we thank you for your detailed and thorough review. In the following section, we seek to address each of your concerns.
---
**Q**: "Using an LDM is overkill for this benchmark," and the advantage of an LDM over a VAE for generating the goal frame.
**A**: In LfVoid, the LDM bears the task of **editing** the current observation based on a **text description**, while keeping the irrelevant parts of the image **unchanged**. While there exist works like [1] that utilize CVAEs to generate text-conditioned images or use VAE along with GAN for text-conditioned image editing [2], their ability to fulfill all three requirements of LfVoid is limited. To the best of our knowledge, the pipeline proposed by LfVoid that combines LDMs with inversion, attention control, and object specification is the only method that makes the downstream example-based RL procedure possible. We attribute this to large LDM models’ better understanding of the semantic meaning of the world.
We also explored the line of work that uses VAEs to generate random frames and trains a goal-conditioned policy to fulfill a user-provided goal image at test time. The generated images (randomly sampled from the latent space and decoded) can be seen in Figure 6 of the attachment; they are clearly blurry and lose much of the detail, which makes them unusable for downstream example-based RL methods.
**Q**: "Comparison with existing RL methods leveraging goal frame generation, for example, Goal-Aware Prediction [4], was not provided."
**A**: Thank you for your advice. This is indeed relevant to our method, and we agree that the comparison with such methods is valuable. We have conducted two additional experiments: Goal-Aware Prediction (GAP) and its predecessor work, Visual Reinforcement Learning with Imagined Goals (VIG) [3].
GAP first collects transitions from the environment using a random policy, then trains a dynamic model in latent space on these data. During test time, GAP uses model predictive control to achieve a **user-provided** goal image. VIG collects a random dataset as well and trains a VAE over the image observations, then trains a goal-conditioned RL agent to fulfill sampled goals from the VAE.
We want to point out that the goal image (at test time) in both settings is user-provided, which means the user has to figure out a way to achieve the goal state before ever having a feasible policy to achieve it. What's more, since both these methods use a random policy to collect data, it is hard for them to acquire faithful knowledge of the dynamics or of the distribution over goal images, since the random policy may never reach such a goal state (like wiping all the markings off the table).
In the additional results, we provide an oracle goal image (in contrast to the edited images from LfVoid) to both methods to make them fulfill the goal state. The rewards of the three simulated tasks can be found in Table 5 in the attachment. It is clear that they cannot achieve any of the tasks despite being provided the oracle goal observation.
**Q**: "As mentioned in summary, 4 techniques, including DreamBooth, null-text inversion, P2P and Directed diffusion, are incorporated but the ablation study only compares null-text P2P-DD and null-text P2P"..., "Can you provide any experiment to validate the improvement brought by DreamBooth token?"
**A**: We thank the reviewer for this question and have provided more ablation studies in Figure 1 and Figure 2 of the attachment. In particular, we report the results of removing the DreamBooth module (Null-text P2P and Null-text P2P-DD) and observe that the DreamBooth token contributes significantly to preserving the background details and performing the desired edit. Additionally, we report quantitative ablation results in Table 3 and Table 4 of the attachment.
We kindly refer reviewers to the general response section for common questions. If there's anything unclear in the above, we're more than glad to further discuss the details.
[1] Zhang, Chenrui, and Yuxin Peng. "Stacking VAE and GAN for context-aware text-to-image generation." 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM). IEEE, 2018.
[2] Pernuš, Martin, et al. "Fice: Text-conditioned fashion image editing with guided gan inversion." arXiv preprint arXiv:2301.02110 (2023).
[3] Nair, Ashvin V., et al. "Visual reinforcement learning with imagined goals." Advances in neural information processing systems 31 (2018).
[4] Nair, Suraj, Silvio Savarese, and Chelsea Finn. "Goal-aware prediction: Learning to model what matters." International Conference on Machine Learning. PMLR, 2020.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Most of my concerns are well addressed. Therefore, I am happy to increase my rating to accept.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We thank the reviewer for the effort and the positive feedback. | Rebuttal 1:
Rebuttal: Dear reviewers, we appreciate all your helpful feedback. In this response, we address the common questions and comments. We welcome further discussion with each reviewer to address any remaining concerns.
We’d like to thank Reviewer jYPU for acknowledging that our method is “An organic integration of several techniques for controlled generation” that “subsequently enhance the performance of example-based RL”, and Reviewer pQmz for acknowledging our work is "A lot more convincing" since the introduction of real world experiments and demonstrates "impressive performance".
Our work (LfVoid) explores the possibility of utilizing the intrinsic knowledge embedded in pretrained latent diffusion models (LDMs) to provide visual guidance for robotic reinforcement learning. Starting from the current observation and language instructions describing the desired goal, LfVoid trains an RL agent to achieve that goal with **zero need** for **expert demonstrations** or human-designed **reward functions**.
In the following part, we address some of the common concerns of the reviewers:
**Quantitative evaluation over generated images**
We thank the reviewers for this suggestion. We have reported the LPIPS distance and L2 distance between the edited images and real goal images in Table 1 and Table 2 of the attachment. We also report several more ablation tasks to make the ablation study more concrete. It is clear from the results that LfVoid is not only superior to the other baselines, but also performs best with all its components combined.
**The ability of LfVoid for general-purpose image editing**
We show that the goal-generation method proposed by LfVoid is not limited to robotics environments and can be used to perform general-purpose editing. We include several general editing results in Figure 4 of the attachment. The results show that LfVoid can perform more **localized** editing as well as **preserve the background** according to only text instructions when compared to existing editing methods. We only choose to study LfVoid’s performance in the robotics context because the ability to perform localized editing with high fidelity to the original image is important when using large-scale text2image models to guide robot learning.
**Prompt Tuning for the editing instructions**
Prompt tuning is very lightweight for LfVoid as we simply visualize the attention maps and see which prompt results in an attention map that better associates each token with its region of interest. We only searched about 5 different prompts for LfVoid, and we also performed the **same** amount and type of prompt tuning for InstructPix2Pix and Imagic. We will include this description in the revised paper.
**Reinforcement Learning with generated visual goal baselines**
At the heart of this work, LfVoid is leveraging knowledge in large-scale diffusion models to generate realistic goals to guide robot learning without any in-domain training. Although the mentioned works [1][2] also use imagined visual goals, these visual goals are generated from a random policy and sampled, and thus cannot be explicitly aligned with human intentions. Furthermore, a human-designed goal image is needed at test time. In comparison, LfVoid allows users to directly describe the goal state through language, and can clearly visualize the goal state to avoid ambiguity.
We thank the reviewers for pointing out this line of work and will add it to the related work section in our final version. We provide an additional comparison with these works; the results can be found in Table 5 of the attachment. Since the random exploration policy never reached the desired goal, these works failed to accomplish any of the simulated tasks.
**Additional tasks and results**
Our conducted experiments cover a wide range of manipulation tasks and visual appearances. For example, the Push task and the Wipe task require distinct skills to achieve the goal image. We also include both simulated environments and real robot tasks to demonstrate LfVoid's feasibility.
During the rebuttal, we have added a distractor object to push-sim and introduced three real-world tasks using the UR5 robot arm and reported the goal generation results in Figure 3 in the attachment.
We kindly refer reviewers to the attached PDF for additional figures and tables.
[1] Nair, Ashvin V., et al. “Visual reinforcement learning with imagined goals.” Advances in neural information processing systems 31 (2018).
[2] Nair, Suraj, Silvio Savarese, and Chelsea Finn. “Goal-aware prediction: Learning to model what matters.” International Conference on Machine Learning. PMLR, 2020.
Pdf: /pdf/1d4b1f6a7ae574249e0543536771fbe3c2230a7a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Coherent Soft Imitation Learning | Accept (spotlight) | Summary: This paper studies the imitation learning (IL) problem. There are two main classes of IL methods: behavioral cloning (BC) and inverse reinforcement learning (IRL), each with its own advantages. This paper proposes an IL method Coherent Soft Imitation Learning (CSIL) which combines BC and IRL. CSIL first learns a reward function named “coherent reward” under which the BC policy is optimal, and then performs soft policy iteration on the learned coherent reward. The authors evaluate CSIL on a wide range of settings and tasks including online and offline IL for high-dimensional and image-based continuous control tasks.
Strengths: 1. The proposed algorithm CSIL is novel according to my knowledge.
2. The authors evaluate CSIL on a wide range of settings and tasks including tabular tasks, online and offline IL from agent demonstrations, IL from human demonstrations from states and images.
Weaknesses: 1. The proposed method CSIL fails to combine the advantages of both BC and IRL, which is the main contribution claimed in this paper. In CSIL, the coherent reward function is learned under the principle that the BC policy is the (soft) optimal policy with that reward function. So the best policy we can expect to obtain from performing soft policy iteration on the learned coherent reward function is the BC policy. Besides, CSIL does not retain the main merit of IRL: addressing the compounding errors issue of BC. It is known that IRL leverages the state-action distribution matching principle to address the compounding errors issue in BC [1, 2, 3]. However, CSIL does not inherit this principle to learn the reward function. Therefore, the advantage of CSIL over BC and IRL is unclear.
2. The theory about the coherent reward (Theorem 1) is not novel. In [4], they have derived the feasible reward formula in the maximum entropy RL setting, which is exactly the case when the prior policy is a uniform policy in Theorem 1 (this choice of prior policy is also used in experiments in this paper). However, the authors do not discuss this existing result.
3. The writing in this paper needs improvement, and some parts are confusing. Even though I have read most of the IL papers in the reference, I still found it difficult to follow this paper. Please refer to the detailed comments in the Question section.
4. In experiments, CSIL performs worse than the existing method DAC on online IL in MuJoCo tasks. DAC outperforms CSIL in HalfCheetah-v2 and Hopper-v2 while having competitive performance with CSIL in Ant-v2 and Walker2d-v2. Besides, the offline IL setting considered in this paper is actually the offline IL with a supplementary dataset setting rather than the pure offline IL setting. However, the authors do not include methods from the offline IL with a supplementary dataset setting [5, 6] for comparison.
References:
[1] Seyed Kamyar Seyed Ghasemipour, Richard Zemel, and Shixiang Gu. A divergence minimization perspective on imitation learning methods. In Conference on Robot Learning, 2019.
[2] Nived Rajaraman, Lin Yang, Jiantao Jiao, and Kannan Ramchandran. Toward the fundamental limits of imitation learning. Advances in Neural Information Processing Systems, 2020.
[3] Tian Xu, Ziniu Li, and Yang Yu. Error bounds of imitating policies and environments. Advances in Neural Information Processing Systems, 2020.
[4] Haoyang Cao, Samuel Cohen, and Lukasz Szpruch. Identifiability in inverse reinforcement learning. In Advances in Neural Information Processing Systems, 2021.
[5] Geon-Hyeong Kim, Seokin Seo, Jongmin Lee, Wonseok Jeon, HyeongJoo Hwang, Hongseok Yang, and Kee-Eung Kim. DemoDICE: Offline imitation learning with supplementary imperfect demonstrations. In Proceedings of the 10th International Conference on Learning Representations, 2022.
[6] Haoran Xu, Xianyuan Zhan, Honglei Yin, and Huiling Qin. Discriminator-weighted offline imitation learning from suboptimal demonstrations. In Prooceedings of the 39th International Conference on Machine Learning, 2022.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Was Figure 1 obtained from experimental results? If so, please provide a brief description of the experimental setup used to obtain the results shown in Figure 1.
2. Algorithm 1 does not provide a clear description of the proposed CSIL algorithm. First, it does not explain how to initialize the shaped critic and only shows a fixed point equation where both sides depend on the critic to be learned. Additionally, Algorithm 1 does not clearly demonstrate how CSIL utilizes online interactions, offline samples, or the dynamic model.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please see the detailed reviews in the Weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their comments and critical feedback.
**Weakness 1.** The reviewer highlights an important aspect of CSIL: If the demonstration data covers the entire state-action space and the BC fit is exact, then indeed CSIL will leverage and match this expert BC policy. But in this case, BC should be all you need, because there should be no compounding errors if the demonstration coverage is ideal.
However, in the case where the expert demonstrations do not cover the entire state-action space, there will be regions where the BC policy is undefined and likely ineffective, so we perform RL to improve the policy in these gaps.
This is where CSIL comes in, by using the BC policy to construct the coherent reward and then using this reward to improve using additional environment interactions.
This property is in contrast to BC, which is a non-interactive algorithm and cannot interact with the environment to improve the policy.
We demonstrate this empirically in the tabular inverse optimal control setting (Table 1), where there are no approximations. In the sparse task, the BC policy has high performance, but is brittle to the wind disturbance due to compounding errors when out-of-distribution. CSIL is able to learn out-of-distribution actions that resolve this weakness to the same level as the best IRL baselines. The value functions in Appendix K.1 show that CSIL and GAIL also learn visually similar value functions. This validates our statement above that CSIL improves BC outside of the demonstration distribution.
We are also able to show this in the function approximation setting. For example, in Figure 32, CSIL is able to learn an effective high-dimensional manipulation policy on door-v0 from one demonstration, even when its initial BC policy is non-performant. Figure 28 shows that DAC and SAC-from-demonstrations require 3x the experience to reach similar performance on door-v0 with one demonstration.
This is evidence that CSIL learns an effective shaped reward from the demonstration data.
In summary:
* CSIL improves on BC by being able to use additional interactions with a learned reward.
* CSIL is an inverse RL method as it inverts the RL policy update and learns a shaped reward.
* These properties are validated in our experimental results across a range of settings.
**Weakness 2.** We wish to thank the reviewer for highlighting Theorem 1 and Remark 3 of Cao et al. (2021). We will mention this work alongside our acknowledgement of Jacq et al. in the main text, who derived a similar maximum-entropy policy inversion in 2019 and inspired our approach. We will also replace 'derive' with 'use' in our first contribution summary (line 36) to emphasize that our contribution is the combination of the policy inversion result with BC for IRL, rather than the mathematical result itself, although our relative-entropy inversion is also slightly more general than the maximum-entropy derivation.
**Weakness 3.** We have restructured Section 3 to be easier to follow (see our main rebuttal response). If you have specific feedback for which passages were confusing or hard to follow, that would be appreciated and we would be happy to incorporate that feedback.
**Weakness 4.** In the paper we tried to show the performance of CSIL in many diverse domains, even when baselines may be stronger.
We believe that it is unrealistic to expect a new approach to beat all existing baselines on all tasks.
While DAC is indeed better in some of the online MuJoCo locomotion tasks, CSIL still solves these tasks, whereas DAC performs less well on the more complex Adroit and Robomimic tasks that CSIL can solve. Moreover, our implementation of DAC was based on the extensive hyperparameter and regularizer investigation of Orsini et al. [11], which makes it a strong baseline.
We agree with the reviewer's suggestion for explicit offline imitation baselines for the offline locomotion tasks.
We have run DemoDICE [1] and SMODICE [2] in our offline setting.
In the interest of time, we used the authors' original open-source implementations of these algorithms rather than reimplementing them in Jax and Acme in our codebase.
We discuss these algorithms and their performance in our main rebuttal comment and include the experimental results in the attached PDF.
**Q1.** Figure 1 is a toy contextual bandit problem used for illustration purposes. The generating script is 'pull_figure.py' in the provided code in the supplementary material.
The classifier uses the same spectral-normalized network used in Acme's DAC implementation. The CSIL reward was computed using a non-parametric Gaussian process with a stationary squared exponential kernel. The aim of this figure was to illustrate why our log ratio using regression with a stationary process might be preferable to a classification approach. We have added more context to the figure caption.
**Q2.** Algorithm 1 was written as a general summary of both the tabular and deep implementation of CSIL, which is why it lacks specificity.
In the tabular case, the critic is initialized with the log policy ratio, exactly like the coherent reward.
In the continuous case, the critic is pre-trained using SARSA and the squared Bellman error.
The soft policy iteration (SPI) loop at the end of the algorithm could be any oracle-based, model-free or model-based SPI routine that uses the coherent reward.
For the deep learning implementation, we use SAC.
We have updated Algorithm 1 to improve clarity and added a second algorithm to the Appendix that specifically describes the SAC-based online deep RL implementation.
We hope this rebuttal addresses your concerns. Please let us know if you have remaining concerns or further questions.
[1] DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations, Kim et al., ICLR 2022
[2] Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching, Ma et al., ICML 2022
---
Rebuttal Comment 1.1:
Title: Thanks for the authors' response.
Comment: Thanks for the authors' detailed response. However, my major concern in Weakness 1 is not well addressed. The authors argue that when the demonstrations do not cover the entire state-action space, CSIL performs RL to improve the BC policy, using the BC policy to construct the coherent reward and then using this reward to improve with additional interactions. However, there remains a key question: **whether performing RL with the coherent reward function can improve the BC policy or not**. I think the answer is no. This is because **the coherent reward function is learned under the principle that the BC policy is the optimal policy. Therefore, the best policy we can expect to obtain by performing RL with the coherent reward is the BC policy**. So I think that CSIL cannot improve on BC in theory. This also shows the key difference between CSIL and IRL methods in learning reward functions: CSIL learns a reward function such that the BC policy is the optimal policy, while IRL learns a reward function under the distribution matching principle. So CSIL does not retain the merit of IRL.
---
Reply to Comment 1.1.1:
Title: BC improvement [1/2]
Comment: We thank the reviewer for engaging with the rebuttal and following up with their concerns.
The reviewer asked
'How does performing RL with the coherent reward function improve the BC policy?'.
On a high-level, the answer is
* The KL-regularization temperature changes between the coherent reward definition and RL finetuning. This adjustment allows the policy to deviate slightly from the BC policy during RL finetuning and therefore allows improvement.
If the temperature does not change, the policy does not improve on the BC policy.
* The policy is optimized with a $Q$ function.
The $Q$ function is not the same as the coherent reward, because it combines dynamic programming and the coherent reward.
The reward enables overcoming the compounding error problem by using policy evaluation.
CSIL's ability to improve the policy out-of-distribution with the coherent reward is one of the more subtle aspects of the method, so we provide more in-depth explanations with some examples or derivations.
**The shaping view.** The policy is not optimized with the coherent reward greedily, but rather using a $Q$ function fitted using Bellman’s equation. The definition of the coherent reward means that it should ideally be positive in-distribution $(s,a \in \mathcal{D})$, negative for incorrect actions ($s \in \mathcal{D}, a\notin\mathcal{D}$) and zero outside of the demonstration state distribution ($s \notin\mathcal{D}$), as mentioned in lines 169 - 172.
This shaping means that the $Q$ function defined using the coherent reward assigns higher value to actions that keep or return the agent to the demonstration distribution.
This credit assignment applies to states out-of-distribution as well as in-distribution.
This $Q$ function enables the policy to learn to overcome the compounding error problem typical in BC.
Also, remember that the temperature is reduced for RL ($\beta<\alpha$), so the KL-regularization lets the policy improve on the BC policy by regularizing against the prior less. If $\beta = \alpha$ then the policy won't improve on the BC policy, which happens because the KL divergence regularization cancels out the coherent reward in the soft Bellman equation.
*Example.*
Consider the tabular setting where the BC policy fits a deterministic expert perfectly for all states in the demonstration data, and matches the prior otherwise.
The prior is uniform, with $p(a|s) = 1/|\mathcal{A}|$ for all $s\in\mathcal{S}$.
This setting means that the coherent reward has the following values:
* For $s,a \in \mathcal{D}$, $r(s,a) = \alpha(\log 1 - \log1/|\mathcal{A}|)=\alpha\log|\mathcal{A}|$
* For $s \in \mathcal{D}, a\notin\mathcal{D}$, $r(s,a) =\alpha( \log 0 - \log1/|\mathcal{A}|)= -\infty$
* For $s\notin \mathcal{D},\forall a\in\mathcal{A}$, $r(s,a) = \alpha( \log1/|\mathcal{A}|-\log1/|\mathcal{A}|)=0$
This reward results in the following returns for discount factor $0 < \gamma < 1$:
* A trajectory that stays in-distribution has return $\frac{1}{1-\gamma}\alpha\log|\mathcal{A}|$
* A trajectory that leaves the demonstration distribution (due to compounding errors) at time $t_1$ has return $\frac{1-\gamma^{t_1}}{1-\gamma}\alpha\log|\mathcal{A}|$
* A trajectory that leaves the demonstration distribution (due to compounding errors) at time $t_1$, but manages to recover at time $t_2$, has return $\frac{1-\gamma^{t_1}+\gamma^{t_2}}{1-\gamma}\alpha\log|\mathcal{A}|$
Therefore trajectories that stay in-distribution longer and recover faster have higher returns.
These trajectory returns mean the agent is encouraged to learn to overcome compounding errors and stay in-distribution.
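The return ordering above can be checked numerically. The following is an illustrative sketch (our own construction, not the paper's code) using a finite horizon and a zero reward outside the demonstration support:

```python
import math

# Illustrative finite-horizon version of the tabular example: reward is
# alpha*log|A| for in-distribution (s, a) pairs and 0 outside the
# demonstration state distribution.
num_actions, alpha, gamma, T = 4, 1.0, 0.9, 50
r_in = alpha * math.log(num_actions)

def discounted_return(rewards, gamma):
    return sum(gamma**t * r for t, r in enumerate(rewards))

t1, t2 = 10, 20
stays = [r_in] * T                                              # never leaves support
leaves = [r_in] * t1 + [0.0] * (T - t1)                         # leaves at t1
recovers = [r_in] * t1 + [0.0] * (t2 - t1) + [r_in] * (T - t2)  # recovers at t2

# Leaving and never recovering matches the closed form (1-gamma^t1)/(1-gamma).
assert abs(discounted_return(leaves, gamma)
           - (1 - gamma**t1) / (1 - gamma) * r_in) < 1e-9
# Staying in-distribution longer and recovering faster yields a higher return.
assert (discounted_return(stays, gamma)
        > discounted_return(recovers, gamma)
        > discounted_return(leaves, gamma))
```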
(continued in the next comment) | Summary: This study seeks to leverage the advantages of Behavioral Cloning (BC) and Inverse Reinforcement Learning (IRL) to develop a sample-efficient imitation learning algorithm. However, the integration of these two approaches is not straightforward, as optimizing the policy using a dynamic reward diminishes the benefits of BC pre-training. To address this challenge, this work derives a shaped reward based on pre-trained BC according to the entropy-regularized policy update. This shaped reward enables policy refinement without compromising the advantages gained from BC, as it remains consistent with BC. Experimental results demonstrate that the proposed Coherent Soft Imitation Learning (CSIL) effectively addresses both online and offline imitation learning tasks in high-dimensional continuous control and image-based scenarios.
Strengths: * The experiments are extensive and comprehensive.
* The proposed algorithm is able to solve complex control tasks using only a few demonstrations.
* The novelty is commendable.
Weaknesses: * The performance is influenced a lot by the efficacy of BC.
* The algorithm requires some tricks to work well.
* The readability is not good.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: * How is p(a|s) in Eq 2 derived? It doesn’t seem to appear in [1].
* How is p(a|s) in Eq 4 derived? It doesn’t seem to appear in [2].
* Definition 1 appears to be somewhat redundant, given that the information is already provided in Eq 4 and KL-regularized RL objective in the Background section.
* What does "classifier" refer to in Fig 1? Does it correspond to the discriminator in GAIL?
* In Fig 1, it appears that the color intensity of Ours does not necessarily indicate a greater distance from the data. Does this imply that the agent may be unable to return to expert support in certain regions?
* Algorithm 1 is vague, for example:
* Where is the fine tuning part using additional data(online/offline).
* The shaped coherent reward uses $\theta$ but there is no $\theta$ on the right-hand side of $=$.
* What’s the exact objective/equation for computing $\tilde{Q}_n$?
* It would be better to include the tricks mentioned in Section 4 into Algorithm 1.
* The placement of “|” in Lemma 2 seems to be wrong: $p(Y, w| X)$ -> $p(Y | w, X)$
* What is the influence of the reference policy ($p(a | s)$ or $q_{\theta_1}(a | s)$)?
* What would happen if we only use Eq 10 or its improved version to train without BC pre-training?
* In Fig 27, BC’s performance is better than the initial performance of CSIL, which is different from Fig 31, what’s the reason for this?
* In Fig 31, there are some tasks where BC’s performance is better than CSIL, e.g. HalfCheetah-v2(n=30) and Hopper-v2(n=3). This seems to contradict the feature of CSIL; what’s the reason for this?
* The scores of the experts are somewhat low on HalfCheetah-v2(8770) and Hopper-v2(2798); it would be better to train on better experts, e.g. HalfCheetah-v2($\ge$12000) and Hopper-v2($\ge$3500).
* In Fig 3, why does CSIL exhibit a deteriorating performance in Hopper-v2 as the number of demonstrations increases?
* Some typos:
* Fig 28, again -> against
* Fig 36, s hows -> shows
* Line 984, and and -> and
1: Brian D Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. Carnegie Mellon University, 2010.
2: Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off- policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, 2018.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: As mentioned in the paper, the performance of CSIL is constrained by the capabilities of BC. If BC fails to successfully address the task with an adequate number of demonstrations, CSIL will encounter similar limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comprehensive review and feedback.
We address all of the issues raised and answer the specific questions below.
**Policy prior $p(a|s)$.** Maximum entropy RL is a specific case of KL-regularized RL when $p(a|s)$ is a uniform distribution. We mention this in the passage in lines 72 - 78.
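This equivalence can be illustrated with a quick numeric check (our sketch, not from the paper): for a uniform prior, the KL divergence to the prior equals $\log|\mathcal{A}|$ minus the policy entropy, so the KL-regularized and maximum-entropy objectives differ only by a constant:

```python
import math
import random

# For a uniform prior p(a|s) = 1/|A|, KL(q || p) = log|A| - H(q),
# so KL-regularized RL with a uniform prior is max-entropy RL up to a constant.
random.seed(0)
num_actions = 5
q = [random.random() for _ in range(num_actions)]
total = sum(q)
q = [qi / total for qi in q]  # normalize to a valid policy at one state

kl_to_uniform = sum(qi * math.log(qi * num_actions) for qi in q)
entropy = -sum(qi * math.log(qi) for qi in q)
assert abs(kl_to_uniform - (math.log(num_actions) - entropy)) < 1e-12
```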
**Definition 1.** The inclusion of pseudo-posteriors is to emphasize the connection between KL-regularized optimization and Bayesian inference methods, which was crucial in this work due to the necessity of a stationary posterior policy given the uniform action prior. Repeating similar mathematical results is somewhat unfortunate, but is used to provide a formal definition while also showing how the notion of a pseudo-posterior is general to any KL-regularized optimization problem beyond the RL discussion in Section 2, such as regression task that BC performs.
**Figure 1.** The goal of Figure 1 was to illustrate, on a low-dimensional contextual bandit problem with one state and one action, why the coherent reward with a stationary prior might be preferable to the classifier-based reward used by methods such as GAIL. Because it's a contextual bandit problem, there is no dynamical system.
The agent will sample actions from the yellower regions.
We will make this clearer in the caption.
The coherent reward is constructed using a stationary nonparametric Gaussian process, so the color depends on the data points and the GP kernel.
The main takeaway we wanted to communicate is that the coherent reward captures the shape of the true reward faithfully by using regression and the log ratio.
**Reference policy.** This design decision is expanded in ‘Using the cloned policy as prior.’.
If the prior $p(a|s)$ is the uniform distribution, then using it results in the maximum entropy soft Bellman equation.
If instead the BC policy is used, like in the continuous setting, the soft Bellman equation and policy objective are now regularized explicitly against the initial BC policy.
While this switch changes the policy update from which the coherent reward was derived, since the BC policy should match the prior outside of the demonstration distribution due to the stationary property, and we don't desire the policy to change significantly in-distribution, this switch does not change the objective too much and adds some very useful regularization.
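A minimal sketch of this distinction (our notation and made-up values, not the paper's implementation): the KL-regularized soft value for a prior $p$ is $V(s)=\beta\log\sum_a p(a|s)\exp(Q(s,a)/\beta)$. A uniform prior recovers the max-entropy soft value up to a constant, while a BC prior re-weights the backup toward the cloned policy's actions:

```python
import math

# KL-regularized soft value at a single state for a given action prior.
def soft_value(q_values, prior, beta):
    return beta * math.log(sum(p * math.exp(q / beta)
                               for p, q in zip(prior, q_values)))

q_values, beta = [1.0, 0.5, -0.2], 0.5
n = len(q_values)
uniform = [1.0 / n] * n
bc_prior = [0.7, 0.2, 0.1]  # hypothetical BC policy at this state

# With a uniform prior this equals the max-entropy soft value minus beta*log|A|.
maxent = beta * math.log(sum(math.exp(q / beta) for q in q_values))
assert abs(soft_value(q_values, uniform, beta)
           - (maxent - beta * math.log(n))) < 1e-12
# A BC prior changes the backup by weighting the cloned policy's actions more.
assert soft_value(q_values, bc_prior, beta) != soft_value(q_values, uniform, beta)
```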
**BC vs CSIL performance.**
The BC policy used by CSIL is trained in a simpler way than the BC baseline.
For the BC baseline, we use a training protocol which learns for many iterations (e.g. 1M) and uses policy evaluation during training to pick the best performing policy, to mitigate the effects of under- and overfitting.
For CSIL, we thought this protocol was too elaborate, so we instead use fewer iterations and early stopping to obtain a reasonable initial BC policy, which simplifies the algorithm and implementation.
The BC baseline in Fig. 27 uses the training protocol described above, whereas the Fig. 31 ablation compares the initial CSIL BC policy.
**Initial drop in CSIL performance.** We agree the initial performance below BC (CSIL) in Fig. 31 is unexpected.
This happens rarely and the policy surpasses BC quickly.
The best explanation we can give for the initial performance difference is that the initial $Q$ function may not be fully 'coherent' after SARSA pretraining, so the initial policy may 'unlearn' briefly during the initial updates.
This is a consequence of having to use black-box function approximation for the critic.
**Online Hopper-v2 performance.** Looking at Figures 27 and 31, it seems that CSIL on Hopper has the highest performance variance across the gym tasks, and the initial BC (CSIL) performance also has quite high variance.
This suggests that the Hopper data is harder to learn effective coherent rewards from; in other words, that it struggles to differentiate expert and non-expert actions and do good credit assignment.
Also note that for Figure 3 we are pessimistic and use the best 25th percentile return, to show a best worst-case performance. In Figure 3, the selected Hopper-v2 performance is right at the end of training where both median performance and the interquartile range happen to drop. If you instead compare the step-based curves in Figure 27, the performance actually looks similar across the different dataset sizes but is quite high variance.
**Expert performance.** We built on the datasets shared from peer-reviewed prior work (Orsini et al. [11]), who also used Acme and open-sourced their expert demonstrations for researchers to build off. We cannot comment on the performance of these demonstrations.
**Lemma 2.** We believe we have written the result down correctly in Lemma 2. The DPI arises from manipulating a joint probability distribution and applying Jensen’s inequality, and in our case we use $p(y, w | x) = p(y|w, x)p(w)$ as our joint of interest.
**Algorithm 1.** We wanted to keep Algorithm 1 high level so it captures a general description of CSIL within the available space.
We have now added an additional algorithm in the Appendix that describes the SAC-based implementation and its additional tricks and highlighted it in the main text at the end of Section 4.
To answer your questions:
* The soft policy iteration loop at the end of the algorithm performs RL
* We have fixed the math typo in the reward
* $Q_n$ is updated using the soft Bellman equation in Equation 3. For SAC, learning uses target networks and the squared Bellman error.
**CSIL without BC.** This is an interesting question. In theory (and the tabular case) it should work.
We ran an ablation (BC iterations = 0) on the Gym tasks and added it to the rebuttal PDF.
CSIL without BC doesn't seem to work that well on Gym, probably because the reward learning is only tuned for refinement and the 'faithful' heteroscedastic regression fix is not applied in the refinement loss.
Please let us know if you have additional questions.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their replies. I was convinced. I have revised my score.
---
Reply to Comment 1.1.1:
Comment: Thanks. We are glad to hear your concerns have been resolved and you are happier with the work. | Summary: The authors propose a hybrid BC and IRL method, which uses a maximum entropy KL-regularized BC policy to define a shaped reward which can be optimized with IRL. The authors derive a “coherent” reward which is defined as a reward for which the BC policy is optimal, by inverting the soft policy iteration update. The full method first trains a BC policy, defines the coherent reward based on the policy, then uses soft policy iteration to optimize this reward. The authors also provide a version of the method for continuous control which requires several additional components. They compare against other IRL algorithms and BC in a tabular and continuous environments and in online and offline RL settings. Overall, the method performs well in continuous environments given enough demonstrations and performs comparably with other methods elsewhere.
Strengths: - The authors apply a novel perspective of coherence to formulating a hybrid BC and IRL method.
- The proposed method is well-derived with theoretical backing.
- Extensive connections to prior works throughout.
- Experiments in multiple different settings and control tasks.
Weaknesses: - While the tabular form of the method is simple and elegant, the continuous version is quite complex and requires many additional components to work. This includes training the critic with an additional auxiliary loss, using heteroscedastic regression to fit the expert data, and finetuning the coherent reward with a new objective. In comparison, most other IL methods can be applied out-of-the-box for both discrete and continuous tasks. Given its complexity, the proposed method could be difficult to reproduce and tune in new environments. The authors do demonstrate the continuous method on multiple tasks in different domains and include multiple ablations. However, many details and all ablations are not included in the main paper. It would be helpful to at least summarize the results to address the method complexity.
- The derivation of the method can be difficult to follow and seems overly complicated in places. For example, my understanding is that the initial policy is trained with KL-regularized BC but the authors use pseudo-likelihoods and stochastic process regression to derive it which seem like interesting connections but not necessary for the derivation. Providing a simpler derivation or some intuition to guide readers could help make this part clearer.
Overall, these issues are minor, could be addressed with a little re-writing, and do not outweigh the main strengths of the paper. The authors succeed in deriving and empirically verifying a novel hybrid BC and IRL method with the idea of coherence. While additional analysis on the robustness of the method and a simplified derivation could help, these are not major concerns that detract from the paper's main contributions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is $r_\theta$ updated in each policy iteration with the new policy, $q_{\theta_i}$? If so, does this change the coherence property?
- For continuous control policies, you state that you use the cloned policy as the prior in forming the reward. Wouldn’t this lead to zero rewards everywhere because $r(s,a) = \alpha (\log q_\theta(s,a) - \log q_\theta(s,a))$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the note that the BC policy is not always viable as a coherent reward, especially if the task is not solvable with BC even given many demonstrations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wish to thank the reviewer for their comments and kind words about the submission.
We have revised Section 3 to make the derivation easier to follow and emphasize the key technical points and motivations of the KL-regularized BC.
We discuss more details of this text revision in our main rebuttal comment.
We have also added more references to the relevant ablation studies in the main text.
We now address specific concerns:
**Implementation complexity.** In our experience, a deep imitation learning algorithm requires careful implementation details in practice. For example, GAIL / DAC implementations have additional regularization of the discriminator network using regularizers such as spectral norm or gradient penalties.
IQLearn and PPIL used slightly different objectives (tuned per environment) to add additional regularization.
We wanted to be explicit with the implementation details in the main paper for full transparency and to aid the ease of reproduction, as these implementation details are often hidden in the appendix or open-sourced implementation.
We believe our extensive ablation experiments also help explain why each aspect is needed.
**Updating reward parameters.** In the tabular setting, $r_{\theta}$ is fixed. In the function approximation, $r_\theta$ is refined due to non-stationary approximation error in the policy.
This refinement is done to correct any stationary process approximation error out-of-distribution (i.e. where the BC policy doesn't match the prior), rather than alter the BC fit, so the coherency motivation is not violated.
See Figure 9 in the Appendix to see a 1D example of how the refinement improves the stationary approximation of the policy.
**Coherent reward prior.** We apologize that the ‘Using the cloned policy as prior.’ section is unclear.
A uniform policy prior is used for the coherent reward in both the tabular and continuous implementations.
The BC policy is not used as the prior in the coherent reward for the reason you correctly identify.
For the deep imitation learning implementation, the BC policy is used as the prior in the KL-regularized Bellman equation (Equation 3) and policy objective to regularize the policy optimization. This is to encourage the policy to stay close to the BC policy, and has been used previously in online and offline RL to stabilize the policy updates.
We have updated the text in this section so this passage is more clear.
While this switch changes the policy update from which the coherent reward was derived, since the BC policy should match the prior outside of the demonstration distribution due to the stationary property, and we don't desire the policy to change significantly in-distribution, this switch does not change the objective substantially and adds beneficial policy regularization.
We hope this answers your questions and addresses your concerns. Please let us know if you have additional issues.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and clarifications. This addresses all my concerns and questions.
---
Reply to Comment 1.1.1:
Comment: Thank you for getting back to us. We are glad to hear your concerns have been addressed. | Summary: The paper proposes an approach to inverse RL to fine-tune a behavior cloning policy using RL on online of offline data sources. Adopting the KL-regularized view to RL/IRL, CSIL expresses the reward in terms of the behavior cloned policy and connects the result with the well-known reward shaping results. This "coherent" reward is thereby used to improve the policy, resulting in a simple and performant inverse RL algorithm, compared to game-theoretic/adversarial approaches.
Strengths: The paper provides a fairly comprehensive overview and background of inverse RL. The construction of the coherent reward using the entropy-regularized RL framework is interesting, and results in a simple algorithm with soft policy iteration. The experimental results are quite convincing and informative, and sufficiently compared against competitive methods.
Weaknesses: In general, the presentation is a bit more jargon heavy than it needs to be, the writing can be greatly simplified to communicate key ideas more clearly, and make them more amenable for uptake.
The reward refinement seems to invoke a minimax optimization procedure. The paper claims to bypass adversarial IRL methods, but the reward refinement procedure seems to contradict that. Can you clarify the questions below?
It would be good to clarify where the uniform prior is used and where the reference policy is initialized to the BC cloned policy. More details for how the critic is initialized and trained would help address this question.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How important is reward refinement? Do the final experiments use reward refinement? If so, can you report performance with and without reward refinement.
The paper mentions the issue of reward/critic causing the unlearning of the initialization. How does CSIL avoid this issue?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper provides a reasonable discussion of limitations, particularly how the success of CSIL can be sensitive to the success of the initial BC policy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We have revised Section 3 to improve clarity and minimize jargon, and revised the end of Section 4 to add more technical details regarding the reference policies and critic pretraining. To answer the questions here: In the coherent reward, the prior policy is always uniform. For deep imitation learning, in the Bellman equation and policy update objective, we replace the prior policy with the BC policy, for extra regularization. Regarding critic pretraining, in the tabular case we use the log policy ratio like the reward. For the deep imitation learning case, we train an MLP with SARSA and the squared Bellman error objective.
We now answer the remaining questions:
**Reward refinement.** Reward refinement is used in the deep imitation learning implementation. It is ablated in Appendix L.6, Figure 44 of the submission. The results show that it is mainly needed when there are few demonstrations (e.g. 1), and it also helps stabilize convergence in some cases. It is needed to stop the RL agent from exploiting errors in the stationary approximation of the policy, which would result in the coherent reward being larger than desired. This issue is exacerbated in the single-demonstration case, since the initial BC policy is less well-defined and sub-optimal, and the coherent reward is defined in a smaller region of the state-action space compared to when more demonstrations are available.
Our claim that CSIL bypasses the minimax optimization of adversarial IRL is based on three factors:
1. The exact algorithm (e.g. in the tabular setting) does not require reward refinement.
2. The deep learning implementation can still work without reward refinement in some cases.
3. We use reward refinement to reduce approximation error rather than solve the IRL problem. With a better approximation of a stationary process (e.g. using a non-parametric implementation with a stationary kernel), the refinement step becomes less necessary.
**Policy unlearning.** A randomly-initialized reward and critic network (used in baseline methods such as DAC and PPIL) can have arbitrary optimal actions for a given state. As a result, training a BC-initialized policy with this critic leads to ‘unlearning’ when the optimal actions of the BC policy differ from the arbitrary optimal actions of the randomly-initialized critic. In contrast, for our coherent reward, the optimal action is the maximum likelihood action of the BC policy, which after behavioral cloning should be equal or close to the expert action in the demonstration data. SARSA pretraining of the critic transfers these optimal actions from the reward to the critic. As a result, training the BC policy with this critic should not change the policy’s actions in-distribution since the BC policy, reward and critic are all ‘coherent’ w.r.t. the expert demonstrations.
Outside of the expert data distribution, the optimal action is unknown and the coherent reward should be approximately zero. However, the learned Q function will encourage actions that return to the demonstration distribution since the coherent reward is higher in this region.
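To illustrate this transfer (a hypothetical tabular sketch, not the paper's code; for simplicity we use a zero reward for non-expert actions rather than the $-\infty$ penalty), SARSA-style policy evaluation on expert transitions leaves the critic ranking the expert action highest in each demonstrated state:

```python
import math

# Hypothetical tabular setup: a coherent-style reward that is positive on
# expert (s, a) pairs and zero otherwise, evaluated with SARSA so the critic
# inherits the reward's preferred (expert) actions.
num_states, num_actions, gamma, lr = 3, 2, 0.9, 0.5
expert_action = {0: 1, 1: 0, 2: 1}           # hypothetical expert policy
def reward(s, a):
    return math.log(num_actions) if a == expert_action[s] else 0.0

Q = [[0.0] * num_actions for _ in range(num_states)]
demo = [(0, 1, 1), (1, 0, 2), (2, 1, 0)]     # (s, a, s') expert transitions
for _ in range(200):
    for s, a, s_next in demo:
        a_next = expert_action[s_next]       # SARSA: bootstrap on the on-policy action
        td_target = reward(s, a) + gamma * Q[s_next][a_next]
        Q[s][a] += lr * (td_target - Q[s][a])

# After pretraining, the critic's argmax matches the expert action everywhere.
assert all(max(range(num_actions), key=lambda a: Q[s][a]) == expert_action[s]
           for s in range(num_states))
```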
We hope this addresses your questions and concerns. Please let us know if you have follow-up questions.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarifications!
Comment: I have read the rebuttal and the response clarifies some of my concerns. I have revised my score accordingly. Great work!
---
Reply to Comment 1.1.1:
Comment: Thank you for replying to our rebuttal and the kind words. We are pleased to hear we have addressed your concerns. | Rebuttal 1:
Rebuttal: We wish to thank all the reviewers for their comments and feedback. We believe we have been able to address all issues.
To summarize the four reviews:
**General Positives**
* CSIL is novel and interesting [tsdg, qely, nmwn, ba8w]
* The proposed method is sound theoretically [qely]
* Extensive connections to prior works [tsdg, qely]
* Comprehensive experiments, baselines, settings and / or ablations [tsdg, qely, nmwn, ba8w]
**General Weaknesses**
1. Clarity of Algorithm 1, Section 3 and the implementation details [tsdg, qely, nmwn, ba8w]
2. Missing offline imitation learning baselines [ba8w]
3. Missing citation of Cao et al.’s similar policy inversion [ba8w]
**Our improvements**
1. Added missing technical details and improved clarity throughout the text, including Algorithm 1
   1. Restructured Section 3 to start with the policy inversion and then use the coherent reward to motivate pseudo-posteriors and regularized BC
   2. Added the coherent reward in an explicit and expanded definition block, and used an enumerated list to be more explicit about the regularized BC motivating points and details
   3. Added a second algorithm to the Appendix that describes the SAC-based implementation details and includes the additional implementation details from Section 4
2. We have run two additional offline imitation learning baselines: DemoDICE [1] and SmoDICE [2] and added them to Figure 5 and 29 (see PDF)
3. Added discussion of Cao et al. w.r.t. our Theorem 1
While we have restructured and rewritten parts of Section 3, we ensured the underlying topics and details covered remain unchanged.
We are also preparing to open-source the code, so we can release our implementation if the submission is accepted.
The attached PDF contains the offline results with DemoDICE and SMODICE baselines, and an additional ablation study requested by nMWn that runs CSIL without BC pretraining.
We hope this addresses the reviewers concerns. Please let us know if there are remaining issues.
**A note on the new SMODICE and DemoDICE baselines.**
In the interest of time, we ran the authors’ released code rather than implement the methods in Jax and Acme like the other baselines.
The original papers used the D4RL datasets (expert and random) and hundreds (100-200) of demonstrations.
We ran the code for ten seeds with our offline setting (expert and full-replay datasets, with 1, 3, 10, 30 demonstrations).
The main difference is the use of the Orsini et al. expert datasets in our implementation and the use of the D4RL expert datasets in DemoDICE and SMODICE.
However, we have normalized to [0, 1] based on the expert performance in each algorithm’s corresponding expert dataset.
The interesting design decisions between SMODICE and DemoDICE are a discriminator-based reward, state-based value function and weighted BC-based policy update. These aspects separate them from the current offline baselines. The last two aspects are attractive for offline learning, as they mitigate the issue of estimating high value in unobserved actions.
SMODICE and DemoDICE are very similar methods.
The biggest difference between the methods is that DemoDICE learns a state-action reward, while SMODICE learns a state-based reward.
Our experiments show these are both very strong baselines, especially for the Ant environment.
In particular, they also perform well in the one-demonstration setting.
However, CSIL maintains competitive performance on HalfCheetah-v2, Hopper-v2 and Walker-v2 for 10 and 30 demonstrations.
We would also like to note that some implementation details, such as weighted BC-based policy update, could be incorporated into CSIL, since it is agnostic to the SPI implementation (e.g. use MPO rather than SAC). The primary goal with our offline experiment was to compare CSIL against its similar baselines IQLearn and PPIL, since they share many implementation details.
[1] Demodice: Offline imitation learning with supplementary imperfect demonstrations, Kim et al., ICLR 2022
[2] Versatile offline imitation from observations and examples via regularized state-occupancy matching, Ma et al., ICML 2022
Pdf: /pdf/5ea85af41f8b5d9c85eabce68d9a6e04b882c271.pdf | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
SAME: Uncovering GNN Black Box with Structure-aware Shapley-based Multipiece Explanations | Accept (poster) | Summary: A Shapley-value-based method for GNN explanation is proposed: the novel Structure-Aware Shapley-based Multi-piece Explanation (SAME) technique, which fairly considers multi-level, structure-aware feature interactions over an input graph. It is computed by an expansion-based Monte Carlo tree search (MCTS), and its main advantage over existing methods is its theoretical foundation.
Experiments on multiple datasets also highlight practical benefits.
Strengths: - **Clarity:**
Table 1 clearly points out the differences and additional features relative to related methods. The figures have been designed with care and are visually appealing; in particular, the appendix contains some beautiful figures. However, some arguments could be made clearer (see weaknesses), and some larger parts of the paper are hard to understand because information is missing (see the points of minor critique for details).
- **Originality:**
Table 1 explains the novelty of SAME relative to competing methods. While each of the listed features is realised by some existing method, SAME offers their combination.
The authors claim that another distinguishing feature is a theoretical foundation. (However, other Shapley-based measures also benefit from a well-established related theory.)
The proposed method SAME is quite similar to SubgraphX; primarily, the pruning-based algorithm is replaced by an expansion-based one. Thus, the main idea and game-theoretic framework are not novel.
- **Quality:**
The figures are well designed and the text is well structured. The experiments seem standard. The main algorithmic contributions are difficult to understand, however.
- **Significance:**
Experiments suggest a clear improvement over competing methods on multiple datasets.
Only some results on synthetic graphs in the appendix are not competitive.
Weaknesses: - The biggest issue is that considerable parts of the paper are hard to understand. The algorithm and the contributions of its parts are not clear, nor is the game-theoretic approach explained. It would also be important to discuss the detailed differences between SAME and the similar approach SubgraphX.
- The Shapley value is only computed approximately (as common).
- The search space cannot be covered completely; thus, the algorithm has no guarantee of finding a global optimum (as is common).
- The implications of Theorem 1 need to be explained better. Currently, it does not seem to provide clear practical value, since the fact that one bound might be better than another does not imply that the corresponding loss is also smaller.
- Missing literature?
GraphSVX: Shapley Value Explanations for Graph Neural Networks. ECML PKDD 2021.
- **Reproducibility:** No code has been submitted.
**Points of minor critique:**
- The rules of the underlying multiplayer game are not explained.
- Lines 110-121, which are supposed to explain the main contribution (the algorithm that explores the search space), are not really understandable; at the very least, a correction of the grammar would be helpful.
- Lines 130-134: It is impossible to follow the argument why the proposed algorithm requires fewer smoothness constraints on the function $f$, since the pruning algorithm is not introduced before.
It might help to move the discussion of related work to the beginning of the paper.
- Figure 1 is not sufficient to explain the overall approach. The main components are not explained in the text.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: For what kind of GNNs does the proposed explainer work? Could attention be considered?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations have been pointed out under weaknesses and were mostly discussed by the authors. I do not foresee a negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and comments. Below please find our point-by-point responses to your comments.
**Q1. The algorithm and the contribution of its parts are not clear. Neither the game theoretic approach is explained. It would also be important to discuss the detailed differences between SAME and the similar approach SubgraphX.**
1. Detailed algorithm of SAME: Please kindly find the detailed algorithm of our method provided in **Algorithm 1-4** in the Supplementary Material.
2. Contribution: In short, the key contributions of our work include (1) *Theoretical aspect:* i) We summarize and refine the characteristics of previous work and formally provide several desired properties (see Table 1 in the main manuscript) that can be considered by explanation methods for GNNs. ii) We provide the loss bound of MCTS-based explanation techniques and further prove the superiority of the expansion-based MCTS in SAME compared to previous work. (2) *Empirical aspect:* Our experiments cover both real-world and synthetic datasets. The results reveal that i) SAME consistently and significantly outperforms the previous SOTA on different metrics under the same conditions, and ii) SAME qualitatively achieves a more human-intuitive explanation compared to previous methods across multiple datasets.
Our work is related to SubgraphX but differs from it because our approach simultaneously considers multiple explanations, redundancy in the explanation, and node-wise importance of the given graph (see Table 1 in the main manuscript). Our approach is based on MCTS, but with a novel two-phase expansion-based variant, which is shown to outperform SubgraphX and to outperform or be competitive with other SOTA methods in both inference time and explainability (see Table 2 and Table 3 in the main manuscript; Table S3 and Table S4 in the supplementary material; *Overall response: Q4* for additional experiments with detailed results).
3. Game theoretic approach: We have revised the Preliminaries in Section 2 to make it more clear. In short, finding the explanation for a given graph $G$ by using an importance scoring function $I(f(\cdot),G_{ex},G)$ can be formalized as:
$$G_{ex}^* = \mathop{\arg\max}\limits_{G_{ex} \subseteq G} \ I(f(\cdot),G_{ex},G)$$
where each explanation $G_{ex}^i$ has $n_i$ nodes, and the nodes not in the explanation can be expressed as $\{G \backslash G_{ex}^i\}=\{v_{j}\}_{j=n_{i}+1}^n$. The Shapley value, a concept originating from cooperative game theory, is used as the importance scoring function in this work. Therefore, the nodes, substructures, or explanations (each explanation contains one or more substructures) can be regarded as *players* in the game, and the payoff or utility in the game is the Shapley value. The goal of our method is to find an explanation that maximizes the Shapley value.
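To make the role of the Shapley value concrete, here is a minimal Monte Carlo sketch of estimating a single player's (node's) Shapley value under a generic coalition value function. This is an illustrative approximation, not the paper's exact k-hop variant, and all names are hypothetical:

```python
import random

def mc_shapley(value_fn, players, target, num_samples=500, seed=0):
    """Estimate the Shapley value of `target` as its average marginal
    contribution to random coalitions of the remaining players."""
    rng = random.Random(seed)
    others = [p for p in players if p != target]
    total = 0.0
    for _ in range(num_samples):
        rng.shuffle(others)
        k = rng.randrange(len(others) + 1)  # uniform coalition size
        coalition = set(others[:k])
        total += value_fn(coalition | {target}) - value_fn(coalition)
    return total / num_samples

# With an additive value function v(S) = |S|, every player's Shapley
# value is exactly 1, so the estimate is exact here.
print(mc_shapley(len, players=range(5), target=0))  # -> 1.0
```

In the paper's setting, `value_fn` would score the GNN's prediction on the subgraph induced by the coalition, with players restricted to the k-hop neighborhood.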
**Q2. The Shapley value is only computed approximately (as common).**
Indeed, computing the exact Shapley value has always been a very difficult problem [1]. To approximate the exact Shapley value to a certain extent, we adopted the k-hop Shapley (see Eqn. 3 in the main manuscript).
**Q3. The search space cannot be covered completely.**
Given more time, MCTS can explore the search space more thoroughly and return better results. We acknowledge that our approach is not guaranteed to find the global optimum when the given running time is short. However, as MCTS is an "anytime" algorithm, the longer it runs, the better the explanation result tends to be.
**Q4. The implications of the Theorem 1 need to be explained better.**
Thank you for your advice. We have restructured the article to emphasize the theoretical framework as the main contribution in this part of our study. In particular, we have broadened the scope of our theoretical exploration in juxtaposition with SubgraphX and moved the toy examples from the appendix to the main body of the text. We hope this modification emphasizes the importance of this framework in specific application settings and enhances our exposition of theoretical optimality. It is worth noting that we are not comparing different bounds. For a given graph, its optimal explanation should be the same under our mathematical framework. What we want to emphasize is that the expansion-based algorithm can be closer to the optimal explanation than the pruning-based algorithm. Please kindly find the details in *Overall response: Q5*.
**Q5. Missing literature: GraphSVX.**
Thank you for sharing this related work; we will add a discussion of it to the related work section in the final version. In short, although GraphSVX and our work both use the Shapley value as a value function, the technical aspect of our method is orthogonal to GraphSVX: SAME is a perturbation-based method while GraphSVX is a surrogate method.
**Q6. No code.**
We have released the code and sent an anonymized link to the AC in a separate comment.
**Q7. Minor critique to Section 2.2 Line 110-121 and 130-134.**
We apologize for the confusing content in Section 2.2; we have revised this section.
We have modified the overall structure of Section 2.2: firstly, we introduce our mathematical framework; then, under this framework, we emphasize the difference between the pruning algorithm and the expansion-based algorithm, thus making our results clearer. Please kindly find the revised content in *Overall response: Q5*.
**Q8. For what kind of GNNs does the proposed explainer work? Could attention be considered?**
As SAME is a *model-agnostic* technique, our approach does not need further adaptation or modification to be applied to different GNN models. Please kindly find the related experiments in *Overall response: Q3*.
**Reference**
[1] The Shapley value in machine learning. 2022.
---
Rebuttal Comment 1.1:
Title: Score update
Comment: I thank the authors for their response and additional explanations. I have updated my score accordingly. | Summary: The paper introduces a novel method for explaining GNNs called SAME. SAME is theoretically grounded and has some good properties over existing methods. SAME uses an expansion-based Monte Carlo tree search algorithm to approximate the optimal Shapley-based explanation, which is proven to be better than pruning-based methods like SubgraphX. SAME has two MCTS stages, first on nodes and then on connected component combinations. SAME is evaluated on six datasets from different sources with diverse categories, including molecular graphs, sentiment graphs, and synthetic BA graphs. The experiments show that SAME outperforms previous state-of-the-art methods and has reasonable inference time.
Strengths: 1. A theoretical assessment of the expansion-based vs. pruning-based explanation methods is great.
2. The two-stage MCTS approach involving nodes and then components is novel to me and intuitively makes sense.
3. The proposed SAME framework has good empirical performance.
Weaknesses: 1. The level of detail for different parts of the paper can be adjusted
One suggestion I have is to shrink Section 2.1 by moving some content to the Appendix. Although being clear about what the Shapley value is and what good properties it has is important, most of the target readers should be comfortable reading the paper without all the details stated explicitly in the main text. For example, properties 1-4 should easily follow as long as the set of players P_i is clearly defined. Also, "k-hop Shapley" is another fancy term for saying only nodes in the computation graph should be considered. This idea was introduced in SubgraphX, and the idea of the computation graph was brought up even earlier in GNNExplainer. All of this being said, the core idea of Section 2.1 can be quickly made clear by defining the set of players and the computation graph as k-hop neighbors. However, a whole page listing out all four properties and formulas (2) and (3) can make the paper seem redundant and suspicious, as these are not original contributions of the paper.
I would suggest spending more space discussing other important things. For example, at the end of Section 2, the authors mentioned that "It is intuitive to conclude ....". This kind of discussion can be made more detailed when there is more space.
2. Required properties of explanation methods can be made more clear
Several properties of GNN explanation methods are introduced in Table 1, and the major role of sections 2 and 3 is to show SAME is better than other methods in terms of these properties. However, I think some of these properties are not well-defined or not reasonably named, making them not the best ways to measure explanation methods.
a. Structure awareness. The authors claim that 4 methods satisfy this property and 3 do not. However, Section 2.1 discusses that the way SubgraphX and SAME satisfy the structure awareness is through the "k-hop Shapley" idea. I would not say "k-hop" is structure independent, but it only trivially uses the structure to construct the computation graph. The structure awareness refers to a different concept from the existing structure-aware idea proposed by GStarX, as the computation of HN naturally involves the structure. Thus, the property here can be misleading.
b. Multiple explanations. By multiple explanations, the authors seem to mean whether multiple disconnected pieces are allowed to appear together in the explanation output. However, the name here only makes sense when these pieces have a "or" relationship instead of an "and" relationship. For example, when the ground truth explanation for a prediction is "A and B". It is not proper to say there are multiple explanations, one being "A" and the other being "B", because neither of them is a complete explanation. Also, this multiple-explanation idea seems more like a limitation of the SubgraphX method instead of a property in general. SubgraphX assumed connectedness so there will only be one piece, which can be a good assumption instead of a limitation in some cases, but other methods do not make this assumption. In general, I do not think it is a good idea to take an assumption only one method makes and claiming not having it is a required property.
c. Node-wise/Substructure-wise/global importance. I don't feel this categorization makes full sense. If we just look at the definition, global importance implies substructure-wise importance, which further implies node-wise importance. This is because one explanation can be one connected component, which can be one node. How could one method have the global importance property but not the other two? Like GNNExplainer and PGExplainer in Table 1. Also, I believe that if any of these three importance properties is strictly satisfied, the computational complexity will be exponential. How could SAME satisfy them and be a polynomial algorithm?
I hope the authors can better organize these properties so that they 1. do not have conflicts with existing concepts 2. are properly named 3. are more clearly defined and categorized.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: My questions are very related to the weaknesses I mentioned above.
1. Why is it intuitive to conclude the expansion-based method is better than the pruning-based method? Can the authors explain more?
2. If I misunderstood the properties in Table 1 in my weaknesses comment, please point it out.
3. I have some doubts about the time complexity analysis. Can the authors be more detailed about why it is O((M1 + M2)n^2)? Also, other methods, like SubgraphX, do not necessarily have time complexity O(2^n), as SubgraphX considers k-hop neighbors, uses MC approximation, and has a budget for the subgraph size.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and comments. Below please find our point-by-point responses to your comments.
**Q1. The level of detail for different parts of the paper can be adjusted.**
Thank you for your excellent suggestions. We agree that we should adjust the level of detail in Section 2. We have shortened the description in Section 2.1 and provided more details in Section 2.2 following your advice. Please kindly find the revised version in *Overall response: Q5*.
**Q2. Required properties of explanation methods can be made more clear.**
Thank you for giving this thoughtful suggestion. The detailed definitions of the property in Table 1 are as follows.
1. Graph-level tasks: the GNN explanation method can handle the graph classification/regression tasks.
2. Node-level tasks: the GNN explanation method can handle the node classification/regression tasks.
3. Feature interactions: given a graph $G$, when measuring the importance of the explanation result $G_{ex}$, the GNN explanation method can consider the influence of $\{G\backslash G_{ex}\}$ on the importance of $G_{ex}$.
4. Structure awareness: given a graph $G$, when measuring the importance of explaining the result $G_{ex}$, the GNN explanation method is sensitive to the topology of the given input graph $G$.
5. Multiple explanations: the GNN explanation method can provide the explanation $G_{ex}$ that can be composed of one or more connected components.
6. Node-wise importance: given an input graph $G$ to be explained, for any node $v_i\in G$, its $I(f(\cdot),v_i,G)$ importance will be considered by the GNN explanation method.
7. Substructure-wise importance: given an input graph $G$ to be explained, for any substructure $G_{sub_i}\subseteq G$, its importance $I(f(\cdot),G_{sub_i},G)$ will be considered by the GNN explanation method.
8. Global importance: given an explanation $G_{ex}^i\subseteq G$ consisting of one or more substructures, its importance $I(f(\cdot),G_{ex}^i,G)$ will be considered by the GNN explanation method.
9. Priority-based integration: given an explanation $G_{ex}^j$ with any size, the node $v_i\in\{G\backslash G_{ex}^j\}$ will be added by the GNN explanation method on $G_{ex}^j$ to get a new explanation $G_{ex}^k$, if and only if for any $v_l \in \{G\backslash (G_{ex}^j\cup v_i)\}$, $I(f(\cdot),G_{ex}^j\cup v_i,G) > I(f(\cdot),G_{ex}^j\cup v_l,G)$ holds.
10. Redundancy consideration: given an explanation $G_{ex}\subseteq G$, if $I(f(\cdot), G_{ex}\backslash \{i\}, G)$ $>$ $I(f(\cdot), G_{ex}, G)$ holds, the new explanation $G_{ex}'=G_{ex}\backslash \{i\}$ will be chosen by the GNN explanation method.
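As an illustration only (not the authors' implementation), properties 9 and 10 describe greedy expansion and pruning criteria that could be sketched as follows, with `importance` standing in for $I(f(\cdot),\cdot,G)$ and all names hypothetical:

```python
def expand_once(importance, explanation, graph_nodes):
    """Priority-based integration (property 9): add the single outside
    node that maximizes the importance of the enlarged explanation."""
    candidates = [v for v in graph_nodes if v not in explanation]
    best = max(candidates, key=lambda v: importance(explanation | {v}))
    return explanation | {best}

def drop_redundant(importance, explanation):
    """Redundancy consideration (property 10): remove a node if doing
    so strictly increases the importance of the explanation."""
    for v in list(explanation):
        if importance(explanation - {v}) > importance(explanation):
            return explanation - {v}
    return explanation

# Toy importance: sum of per-node scores (purely illustrative).
scores = {1: 1.0, 2: 2.0, 3: -0.5}
importance = lambda subset: sum(scores[v] for v in subset)
print(expand_once(importance, {1}, [1, 2, 3]))  # -> {1, 2}
print(drop_redundant(importance, {1, 2, 3}))    # -> {1, 2}
```

In SAME itself these criteria are explored via MCTS with Shapley-based scoring rather than applied as a single greedy pass.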
We agree that the **structure-awareness** mentioned by the reviewer is used inconsistently between the ‘k-hop Shapley’ in our work and the HN-value mentioned by GStarX. The structure-awareness defined in our work is a macro-level definition: if a GNN explanation method can explicitly (e.g., GStarX) or implicitly (e.g., SubgraphX and ours) process the structural information of the input graph, we say it has the structure-awareness property.
Your understanding of **multiple explanations** is correct. We also think that SubgraphX's restriction to connected substructures is a good assumption, which can make explanation results more in line with human intuition. Indeed, in the first phase of SAME, we also consider the connectivity of substructures when initializing the set of important substructures with expansion-based MCTS. However, it must be admitted that SubgraphX cannot find an explanation whose ground truth contains multiple substructures at the same time, which is one of its limitations. The inability to find explanations whose ground truth contains multiple substructures limits the usability of the method in many scenarios. Thus, we listed *multiple explanations* in the property comparison table.
For the **necessity of distinguishing between node-wise / substructure-wise / global importance**, it is possible for one method to have the global-importance property but not the other two. For example, GNNExplainer and PGExplainer obtain the explanation by learning a mask for the given graph; therefore, their explanation results only consider global importance but not substructure-wise or node-wise importance. It is also possible for one method to have the node-wise-importance property but not the other two. For example, in GStarX, the importance of any substructure is simply the sum of the importance of its nodes, without taking them as a whole to derive the importance. Therefore, it is necessary to distinguish among them.
We admit that strictly satisfying any of the above three importance properties leads to an NP-hard problem. SAME, like other SOTA methods, uses an approximation algorithm to measure the Shapley value and explores possible globally optimal solutions with the help of MCTS.
Note that our approach is based on MCTS, but with a novel two-phase expansion-based variant, which is shown to outperform SubgraphX and to outperform or be competitive with other SOTA methods (see Tables 2 and 3 in the main manuscript; Tables S3 and S4 in the supplementary material; *Overall response: Q4* for additional experiments with detailed results).
**Q3. More details about time complexity of SAME. And, other methods, like SubgraphX.**
We have refined the computational complexity analysis; please kindly see the relevant answer to this question in *Overall response: Q1*. As SubgraphX is a pruning-based method, it requires more time in the simulation step, especially when the given graph is very large and the explanation is small.
As Tables 2 and 3 in the main manuscript, Tables S3 and S4 in the supplementary material, and the additional experiments in *Overall response: Q4* show, SAME significantly outperforms SubgraphX in terms of different explainability metrics with shorter inference time on different benchmarks.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: 1. Thank the authors for adding the time complexity analysis. My current understanding is that the SAME algorithm is polynomial because of MCTS. I hope the authors make this clear in the final version; otherwise, people may mistakenly believe that there is a polynomial-time algorithm for exact Shapley value computation.
2. For structure-awareness, I hope the authors can consider how to properly explain it to readers. As you mentioned, there is an "inconsistent between the ‘k-hop Shapley’ used in our work and HN-value mentioned by GstarX".
3. For the other properties, I don't feel my concerns are properly addressed at the moment. I will think about the response more carefully, and I suggest the authors read my original questions again.
3.1 Can the authors discuss more of my question about the "and" and "or" scenario?
3.2 Looking at the definitions of property 8 and property 6 again, I still don't get why 8 doesn't imply 6. What if $G_{ex}^{i}$ is a single node? Maybe my misunderstanding is due to the statement "importance will be considered" being too vague. Hope the authors can explain more.
---
Reply to Comment 1.1.1:
Title: Thank you for your further suggestive comments
Comment: **1. Time complexity**
Your understanding that the SAME algorithm is polynomial due to MCTS is correct. We will make this point explicitly clear in the final version of the paper.
**2. Structure-awareness**
We will add the following content in the main manuscript so that it will not cause misunderstanding.
*“We say that a GNN explanation method has the structure-awareness property if it can process the structural information of the input graph, whether through explicit mechanisms, as demonstrated by GStarX, or implicit modalities, as exemplified by SubgraphX and this study.”*
**3. Concerns about "Multiple explanations" and "Node-wise / Substructure-wise / Global importance".**
**3.1 Discussion about the "and" and "or" scenario in the "Multiple explanations" property.**
The name of the "Multiple explanations" property can cause misunderstanding. Please allow us to clear this up by renaming "Multiple explanations" to "Multi-piece explanation". "Multi-piece explanation" means that multiple disconnected pieces are allowed to appear together in the explanation output. We only consider the "and" scenario, not the "or" scenario. For example, when "A" and "B" are two separate connected components and the ground truth explanation for a prediction is "A and B", the GNN explanation method has a chance of providing the correct "A and B" explanation.
The following is why we think "Multi-piece explanation" is an important property of a GNN explanation method.
Finding connected substructures, as in SubgraphX, is a good assumption, which can make explanation results more in line with human intuition. After all, the explanation needs to be human-centred. `Indeed, in the first phase of SAME, we also search for connected components when initializing the important substructures set with expansion-based MCTS`.
However, it must be admitted that SubgraphX cannot find the explanation when the ground truth explanation is composed of multiple substructures. The inability to find explanations whose ground truth contains multiple substructures limits the usability of such a method in many scenarios. Thus, we treated "Multi-piece explanation" as a promising property and listed it in the property comparison table.
**3.2 Clarification about "Node-wise / Substructure-wise / Global importance".**
We apologize for the misleading previous definitions of "Node-wise / Substructure-wise / Global importance". We have renamed "Global importance" to "Composite-wise importance" and revised the detailed definitions of "Node-wise / Substructure-wise / Composite-wise importance". We hope the following modifications make the definitions clearer; we will put the revised version of all properties into the supplementary material.
1. **Node-wise importance**: given any node $v_i\in G$, its $I(f(\cdot),v_i,G)$ importance can be considered by the GNN explanation method.
2. **Substructure-wise importance**: given any substructure $G_{sub_i}\subseteq G$, it can be directly calculated as a whole through the GNN explanation method to obtain its importance $I(f(\cdot),G_{sub_i},G)$. Note that any substructure is a connected component which has more than one node.
3. **Composite-wise importance**: given any composite $G_{com}^i\subseteq G$ consisting of more than two nodes or substructures, it can be directly calculated as a whole through the GNN explanation method to obtain its importance $I(f(\cdot),G_{com}^i,G)$.
Under the above definitions, a) it is possible for a GNN explanation method to have only the composite-wise importance property and not the other two. For example, GNNExplainer and PGExplainer both obtain the explanation by learning a mask with a `predefined size` for the given graph; their explanation results only consider composite-wise importance but not substructure-wise or node-wise importance. b) It is also possible for a method to have only the node-wise importance property and not the other two. For example, in GStarX, the importance of any substructure is the `sum of the importance of all nodes in the substructure`, without taking them as a whole to derive the importance. Therefore, it is necessary to distinguish among the different kinds of importance.
In this work, the proposed SAME considers node-wise importance for all nodes (in the first phase). For substructure-wise and composite-wise importance, we admit that covering the corresponding search spaces is an NP-hard problem. This paper provides a method to obtain a high-quality explanation by searching important substructures (in the first phase) and important composites (in the second phase) in polynomial time using expansion-based MCTS. Note that, given enough time, our method has the opportunity to cover the entire substructure-wise and composite-wise search space. | Summary: The paper introduces SAME, a novel method for post-hoc explanation of Graph Neural Networks (GNNs). SAME leverages an expansion-based Monte Carlo tree search to explore structure-aware connected substructures and provides explanations that are as explainable as the theoretically optimal Shapley value. Experimental results show that SAME outperforms previous state-of-the-art methods, improving fidelity performance by 7.01% to 42.3% across various benchmarks. The SAME method contributes to the field of GNN explanation by offering theoretical foundations and improved performance.
Strengths: - Theoretical foundation: The paper introduces the SAME method, which provides a theoretical foundation for post-hoc explanation techniques for Graph Neural Networks (GNNs). This adds credibility and rigor to the proposed approach.
- Structure-aware explanations: SAME addresses the challenge of structure-aware feature interactions in GNN explanations. By leveraging an expansion-based Monte Carlo tree search, SAME explores multi-grained connected substructures, resulting in more informative explanations.
- Improved performance: Experimental results demonstrate that SAME outperforms previous state-of-the-art methods in fidelity performance across various benchmarks. The improvements range from 7.01% to 42.3%, indicating the effectiveness of the proposed method.
Weaknesses: - Limited discussion of limitations: The paper lacks a comprehensive discussion of the limitations or potential drawbacks of the SAME method. Addressing the potential shortcomings would strengthen the paper and guide future research directions.
- Limited comparison with alternative methods: While SAME is shown to outperform previous state-of-the-art methods, the paper does not provide a thorough comparison with alternative approaches. Including such a comparison would enhance the understanding of how SAME stands against other techniques.
- Generalizability to different GNN architectures: The paper does not explicitly discuss the generalizability of the SAME method to different GNN architectures. Considering and discussing the potential challenges or adaptations needed for different architectures would enhance the applicability of the proposed method.
- Theoretical complexity and computational efficiency: Although SAME claims to provide explanations within polynomial time, the paper does not provide a detailed analysis of the theoretical complexity or computational efficiency. Further investigation into the computational aspects would be valuable for understanding the scalability of the method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Theoretical Analysis: Could you provide more details about the theoretical foundation of the SAME method? Specifically, how does the expansion-based Monte Carlo tree search contribute to the structure-aware explanations, and how does SAME ensure the explanations are as explainable as the theoretically optimal Shapley value?
- Comparative Analysis: In the paper, SAME is shown to outperform previous state-of-the-art methods in terms of fidelity performance. However, could you provide a more thorough comparison with alternative explanation techniques? How does SAME compare to other state-of-the-art methods in terms of interpretability, computational efficiency, and generalizability?
- Limitations and Future Directions: While the results are promising, it would be helpful to have a more comprehensive discussion on the limitations of the SAME method. What are some potential drawbacks or challenges associated with SAME? Additionally, what future research directions do you envision to address these limitations and further improve the method?
- Applicability to Different GNN Architectures: The paper focuses on explaining Graph Neural Networks (GNNs). Could you provide insights into the generalizability of SAME to different GNN architectures? Are there any specific considerations or adaptations required for applying SAME to other GNN models, such as graph attention networks or graph convolutional networks?
- Theoretical Complexity and Scalability: The paper claims that SAME provides explanations within polynomial time. Could you provide more details on the computational complexity of the SAME method? How does the method scale with increasing graph size or complexity? Are there any limitations or considerations regarding the scalability of SAME to larger graphs?
- Insights on Parameters and Hyperparameters: The paper mentions the use of Monte Carlo tree search and distinct single substructures in the SAME method. Could you elaborate on how the parameters and hyperparameters of these components affect the quality of the explanations? Are there any guidelines or insights you can provide to help researchers fine-tune these parameters effectively?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: - Limited Discussion on Practical Applicability: The paper primarily focuses on the fidelity performance of the SAME method without discussing its practical applicability or real-world constraints. It would be beneficial to address any potential limitations or challenges in applying SAME to real-world scenarios, such as scalability, interpretability, or sensitivity to noise or perturbations in the input graph.
- Lack of User Evaluation: The paper does not include user studies or evaluations to assess the usefulness or understandability of the explanations generated by SAME. Incorporating user feedback or conducting user studies would provide insights into the effectiveness of SAME from a human-centered perspective.
- Absence of Ablation Studies: The paper does not include ablation studies to analyze the contribution of individual components or design choices in the SAME method. Conducting such studies would help isolate and evaluate the impact of different aspects of the method on the overall performance.
- Limited Dataset Coverage: The paper evaluates SAME on a specific set of benchmarks, both real-world and synthetic. However, it would be valuable to assess its performance on a more diverse range of datasets, including those with different graph sizes, properties, and domains.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and comments. Below please find our point-to-point responses to your comments.
**Q1. The theoretical foundation of SAME. How does the MCTS contribute to structure-awareness, and how does SAME ensure theoretical optimality?**
1. To the best of our knowledge, no existing work has considered or formalised the theoretically optimal value of MCTS. The core of our theoretical work is to establish a framework for MCTS-based GNN explanation techniques. Under this framework, we define an error bound that can be widely applied to different MCTS-based methods. We show that our method has lower error bounds than another MCTS-based GNN explanation technique (SubgraphX). In order to explain our theoretical work more clearly, we adjusted the content of Section 2. Please see *Overall response: Q5* for a detailed explanation. Therefore, although we did not build the algorithm design in Section 2.2 on a full theoretical foundation, the comparison between expansion-based and pruning-based algorithms reflects how our method is "closer in nature to the optimal solution".
2. Because we use the **k-hop Shapley value** in the expansion-based MCTS, the interaction between interpretation and neighbor nodes will be considered in the calculation process, so as to achieve structural awareness.
3. In our theoretical work, we define an error bound that can be applied to different MCTS-based methods. We show that SAME has lower error bounds than another MCTS-based GNN explanation technique (SubgraphX). Here we adjusted the content of Section 2. Please see *Overall response: Q5* for a detailed explanation.
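As background for the Shapley machinery mentioned above, the standard permutation-sampling estimator for Shapley values can be sketched as follows. This is a generic sketch, not the authors' implementation: the value function, names, and toy game are our own, and SAME's **k-hop Shapley value** would additionally restrict the sampled coalitions to the k-hop neighborhood of the substructure being scored.

```python
import random

def mc_shapley(value_fn, players, target, num_samples=200, seed=0):
    """Estimate the Shapley value of `target` by sampling random player
    permutations and averaging the marginal contribution of `target`
    over the coalition of players that precede it."""
    rng = random.Random(seed)
    players = list(players)
    total = 0.0
    for _ in range(num_samples):
        perm = players[:]
        rng.shuffle(perm)
        idx = perm.index(target)
        before = set(perm[:idx])
        # Marginal contribution of `target` given the preceding coalition.
        total += value_fn(before | {target}) - value_fn(before)
    return total / num_samples

# Toy additive game: each player's contribution is its own weight, so
# the exact Shapley value of a player equals its weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
value = lambda coalition: sum(weights[p] for p in coalition)

est = mc_shapley(value, weights.keys(), "b")  # exactly 2.0 for this game
```

For a GNN explainer, `value_fn` would instead query the trained model's prediction on the subgraph induced by the coalition, which is where the approximation error discussed in the rebuttal enters.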
**Q2. More thorough comparison in terms of interpretability, computational efficiency, and generalizability?**
1. Regarding the **interpretability** and **computational efficiency** of our approach, we compared our method with competitive techniques (see Tables 2 and 3 in the main manuscript and Tables S3 and S4 in the Supplementary Material).
We add new experiments on Twitter and BACE with additional results on GraphSVX and OrphicX (*Overall response: Q4*).
2. For **generalizability** of our approach, we further validate SAME on Graph-SST2 & Graph-SST5 using GIN and GAT (*Overall response: Q3*). In short, SAME does not need further adaptation on different GNN architectures since it is a post-hoc model-agnostic technique.
**Q3. A detailed analysis of the computational efficiency.**
We refine the computational complexity. Please kindly see the relevant answer in *Overall response: Q1*.
**Q4. Limited discussion of limitations of SAME.**
Please kindly find the related answer in *Overall response: Q2*.
**Q5. Generalizability of SAME? Specific adaptations for GNN like GAT or GCN?**
As SAME is a post-hoc model-agnostic technique, our approach does not need further adaptation or modification for application to different GNN models. For the **generalizability** of our approach, please kindly refer to *Overall response: Q3*.
**Q6. How the parameters and hyperparameters affect the quality of the explanations? Are there any guidelines or insights?**
The hyperparameters include $\gamma$ and $K$. We randomly select 30 graphs from the Graph-SST5 dataset using GCN, and report the fidelity w.r.t. $\gamma$ and $K$ in the following table.
| $\gamma$ \ $K$ | 1 | 3 | 5 | 7 | 9 |
| :------------------: | :-----: | :-----: | :-----: | :-----: | :-----: |
| 1 | 0.25114 | 0.33188 | 0.35484 | 0.39379 | 0.40260 |
| 2 | 0.28510 | 0.31207 | 0.34179 | 0.36521 | 0.37856 |
| 3 | 0.29380 | 0.31802 | 0.32966 | 0.36365 | 0.37412 |
| 4 | 0.31396 | 0.33553 | 0.34911 | 0.36104 | 0.35396 |
| 5 | 0.31396 | 0.33869 | 0.35042 | 0.35909 | 0.35917 |
Although $\gamma$=1 could lead to optimal fidelity, the explanation may be composed of disconnected nodes, making it not human-understandable. Therefore, we fine-tuned $\gamma$ and $K$ according to the visualization results. The detailed settings are provided in the table below.
| Datasets | BA-2Motifs | BBBP | Graph-SST2 | Graph-SST5 | BA-Shapes | MUTAG | Twitter | BACE |
| :----------: | :--------: | :--: | :--------: | :--------: | :-------: | :---: | :-----: | :--: |
| **$\gamma$** | 3 | 5 | 3 | 3 | 5 | 2 | 3 | 2 |
| **$K$** | 7 | 7 | 7 | 7 | 7 | 7 | 7 | 7 |
**Q7. Limitations in applying SAME to real-world scenarios, such as scalability, interpretability, or sensitivity to noise or perturbations in the input graph.**
Thank you for your careful review and suggestions. We answer your questions point-to-point below.
1. We supplement the discussion about **scalability** of our method, please find the related content in *Overall response: Q2*.
2. In terms of the limited discussion about **interpretability**, we beg to differ, in that the explanatory benchmark includes synthetic, sentiment and molecular datasets (Table 2). We also considered two different kinds of fidelity as our metrics (Tables S3, S4).
Note that for some data the explanation with the maximum fidelity cannot match the ground truth. Therefore, when applying SAME to a real-world dataset, it is necessary to choose metrics according to actual needs.
3. The **sensitivity to noise or perturbations in the input graph** is not generally an issue for post-hoc graph explanation techniques, because if a graph explanation technique were robust to noise, it might overestimate or underestimate the well-trained GNN that needs to be explained.
Due to the character limit of the response, for other limitations, please find the corresponding responses in the following comments.
---
Rebuttal Comment 1.1:
Title: Additional response to Limitations 2-4 (Q8-Q10)
Comment: **Q8. Incorporating user feedback or conducting user studies would provide insights into the effectiveness of SAME.**
Thank you for your excellent suggestion. Indeed, to the best of our knowledge, existing GNN explanation methods do not incorporate human feedback in the search for explanations. This is an excellent future direction in the field.
**Q9. No ablation studies to analyze the contribution of individual components or design choices in SAME.**
SubgraphX can be treated as our ablation version, as it only performs substructure searching (phase 1) but not our explanation set searching (phase 2). As Tables 2 and 3 and Tables S3 and S4 show, our method outperforms SubgraphX in terms of different metrics with shorter inference time.
**Q10. Assess performance on a more diverse dataset with different graph sizes, properties, and domains.**
We added new experiments on Twitter [5] and BACE [6] (*Overall response: Q4*). The diversity of graph size in different benchmarks is summarized as follows.
| Datasets | BA-2Motifs | BBBP | Graph-SST2 | Graph-SST5 | BA-Shapes | MUTAG | Twitter | BACE |
| :----------: | :--------: | :--: | :--------: | :--------: | :-------: | :---: | :-----: | :--: |
| **Min Size** | 25 | 2 | 1 | 2 | 700 | 10 | 3 | 10 |
| **Max Size** | 25 | 132 | 56 | 56 | 700 | 28 | 73 | 97 |
**Reference**
[5] Explainability in graph neural networks: A taxonomic survey. 2022.
[6] MoleculeNet: a benchmark for molecular machine learning. 2018. | Summary: The paper proposes a novel method for explaining GNNs called SAME. SAME addresses the challenges of structure-aware feature interactions in GNN explanation by using an expansion-based Monte Carlo tree search. The authors evaluate SAME on a variety of benchmarks and show that it outperforms previous state-of-the-art methods. The paper makes contributions to the field of GNN explanation.
Strengths: - SAME addresses the challenges of structure-aware feature interactions in GNN explanation by using an expansion-based Monte Carlo tree search, which is novel
- The authors evaluate SAME on a variety of benchmarks, and the experiments show that SAME outperforms previous state-of-the-art methods on these benchmarks.
- The theoretical aspects of SAME are well-developed and provide a good foundation for the proposed method.
Weaknesses: - The authors could provide more information on the limitations of SAME, particularly in terms of the types of graphs and datasets for which it may not perform as well.
- The authors could provide more insight into the computational complexity of SAME and how it scales to larger graphs and datasets.
- The technical content is not sufficient, for example, it does not fully use the 9-page limit.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: How scalable is SAME? Shapley-based explainability methods are generally more expensive than gradient-based methods.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and thorough comments! Below please find our point-to-point responses to your comments.
**Q1. More insight into the computational complexity of SAME and how it scales to larger graphs and datasets.**
For providing insight into the computational complexity and scalability of SAME, we refine the computational complexity. Please kindly see the relevant answer to this question in *Overall response: Q1*.
**Q2. More information on the limitations of SAME, particularly in terms of the types of graphs and datasets for which it may not perform as well.**
Please kindly find the related answer to this question in *Overall response: Q2*.
**Q3. How scalable is SAME? Shapley-based explainability methods are generally more expensive than gradient-based methods.**
As we discussed in our answer to *Overall response: Q2*, our method may take a long time to get an explanation on large graphs, which may be more expensive in time than gradient-based methods.
However, we would like to point out that the proposed SAME has obtained SOTA explanation results (see Table 2 in the main manuscript: fidelity = 0.214±0.000) on the current explanation benchmark with the largest graph size (see Table below: BA-Shapes with size = 700) in a relatively short period of time (see Table 3 in the main manuscript: inference time = 14.08s). This indicates that our method may also work well on larger graphs.
Table. The min / max graph size in different datasets.
| Datasets | BA-2Motifs | BBBP | Graph-SST2 | Graph-SST5 | BA-Shapes | MUTAG | Twitter | BACE |
| :--------: | :----------: | :----: | :----------: | :----------: | :---------: | :-----: | :-------: | :----: |
| **Min** | 25 | 2 | 1 | 2 | 700 | 10 | 3 | 10 |
| **Max** | 25 | 132 | 56 | 56 | 700 | 28 | 73 | 97 |
---
Rebuttal Comment 1.1:
Comment: I have read the authors' responses and I appreciated the efforts made by the authors. I plan to keep my current rating of 6 for the submission. | Rebuttal 1:
Rebuttal: Dear Area Chairs and Reviewers,
We appreciate the valuable feedback and suggestions from the reviewers. Overall, the reviewers deem our paper well written, our method "novel" (qWxr, dDYx) and "effective" (8xQs), our theoretical analysis "well-developed" (qWxr), and our evaluation results "standard" (xcjE). They also asserted that our work "makes contributions to the field of GNN explanation" (qWxr, 8xQs).
In the following content, we make a general response to the questions that several reviewers are concerned about.
**Q1. More insight and details about the computation complexity of SAME.**
To better analyze the limitations (discussed in Q2) of our proposed method, we refine the computational complexity of our approach as follows.
Given a graph $G=(V,E)$, we take a single iteration as an example which includes selection, expansion, simulation and backpropagation [1].
For the **important substructure initialization phase**, the goal of SAME is to figure out a group of important substructures in the *Important Substructure Set* whose sizes are smaller than $\gamma$. In the *selection step*, our method chooses an unvisited node, or chooses the node with the highest reward if all nodes have been visited ($\mathcal{O}(|V|)$). In the *expansion step*, our method selects the node with the largest value among the 1-hop neighbors, which requires $\mathcal{O}(1)$. In the *simulation step*, in the worst case SAME will append all the nodes by visiting all the edges in the graph, which requires $\mathcal{O}(|V|+|E|)$. For the *backpropagation step*, the time complexity is bounded by $\mathcal{O}(\gamma)$, as the maximum size of a substructure (the depth of the search tree) is $\gamma$. Since we perform $M_1$ iterations, according to the law of multiplication, the time complexity of the first phase is $\mathcal{O}(M_1\gamma|V|^2+M_1\gamma|V|\times|E|)$.
For the **explanation exploration phase**, similar to phase 1, in the *selection step* the time complexity is $\mathcal{O}(K)$, since the size of the *Important Substructure Set* equals $K$. The *expansion step* requires $\mathcal{O}(1)$. In the *simulation step*, our method appends other substructures to the current state until the explanation size exceeds the sparsity limit, which is bounded by $\mathcal{O}(2^{K})$. Since $\mathcal{O}(2^{K})$ can be very large, we set a maximum simulation time $t_s$ as a time budget, which requires $\mathcal{O}(t_s)$. The *backpropagation step* is bounded by $\mathcal{O}(\frac{|V|\times(1-sparsity)}{\gamma})$, where $\gamma$ is the maximum size of each substructure. With $M_2$ iterations, the time complexity in this phase is $\mathcal{O}(M_2 K t_s \frac{|V|\times(1-sparsity)}{\gamma})$.
Overall, the time complexity of SAME is $\mathcal{O}(M_1\gamma|V|^2+M_1\gamma|V|\times|E| + M_2 K t_s \frac{|V|\times(1-sparsity)}{\gamma})$. As $t_s$, $\gamma$, $K$, $M_1$, $M_2$ and $sparsity$ are predefined, SAME is a polynomial-time method under these constraints.
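To make the first search phase concrete, a heavily simplified sketch of expansion-based substructure search is shown below. This is illustrative only and is not the authors' algorithm: it drops the UCT selection rule, Shapley-based scoring, and reward backpropagation through a search tree, and the toy graph and motif reward are our own assumptions.

```python
import random

def expansion_search(neighbors, reward, gamma, iterations=200, seed=0):
    """Toy sketch: grow connected substructures up to size `gamma` by
    repeatedly appending 1-hop neighbors, keeping the best-scoring
    substructure found across all rollouts."""
    rng = random.Random(seed)
    nodes = sorted(neighbors)
    best_set, best_reward = None, float("-inf")
    for _ in range(iterations):
        # Selection (simplified): start a rollout from a random seed node.
        state = {rng.choice(nodes)}
        # Expansion + simulation: append random 1-hop neighbors until the
        # substructure reaches the size limit gamma or cannot grow.
        while len(state) < gamma:
            frontier = {m for n in state for m in neighbors[n]} - state
            if not frontier:
                break
            state.add(rng.choice(sorted(frontier)))
        # Backpropagation (simplified): just remember the best substructure.
        r = reward(state)
        if r > best_reward:
            best_set, best_reward = set(state), r
    return best_set, best_reward

# Toy path graph 0-1-2-3-4; the "important" motif is {1, 2, 3}.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
motif_score = lambda s: len(s & {1, 2, 3})
best, score = expansion_search(adj, motif_score, gamma=3)
```

The per-rollout cost here mirrors the bound above: each rollout visits at most $\gamma$ expansion steps, and computing the frontier touches the edges incident to the current substructure.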
**Q2. More information about limitations of SAME.**
The limitations of our approach can be summarized as follows.
1. Scalability on large graphs. The time complexity of our method is $\mathcal{O}(M_1\gamma|V|^2+M_1\gamma|V|\times|E| + M_2 K t_s \frac{|V|\times(1-sparsity)}{\gamma})$.
When searching for an explanation on a large graph ($|V|$ is large),
a. $|V|^2$ will become large.
b. $K$ needs to be larger accordingly to find the relevant explanation.
c. $t_s$ needs to be increased so that MCTS can converge.
d. when the input graph is dense ($|E|$ is large), $|V|\times|E|$ will also become large.
2. Approximation of Shapley value. For time efficiency, the Shapley value can only be computed approximately. Under this approximation, the fairness axioms no longer hold. This is also identified as an unsolved issue for the Shapley value in machine learning by [2].
**Q3. Generalizability of SAME on different GNN architectures**
Our method does not need further adaptation for different GNN architectures.
For the generalizability analysis, we tested SAME on GAT and GIN. The results are provided in Table 1 in the uploaded pdf. The results of the other two alternative explanation techniques (GraphSVX [3] and OrphicX [4]) are also provided.
**Q4. Further comparison of our SAME and alternative explanation techniques on other benchmarks**
To make our evaluation more convincing, we add new experiments on the text sentiment network Twitter [5] and molecular network BACE [6]. The results are presented in Table 2 in the uploaded pdf.
In the following individual response, we address all the raised questions and add some new experiment results to further strengthen our contributions.
**Reference**
[1] A survey of monte carlo tree search methods. 2012.
[2] The shapley value in machine learning. 2022.
[3] Graphsvx: Shapley value explanations for graph neural networks. 2021.
[4] Orphicx: A causality-inspired latent variable model for interpreting graph neural networks. 2022.
[5] Explainability in graph neural networks: A taxonomic survey. 2022.
[6] MoleculeNet: a benchmark for molecular machine learning. 2018.
Pdf: /pdf/d441ac7f7719fb38c26f9618d7641f7999d265d9.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design | Accept (poster) | Summary: This paper focuses on the problem of learning update rules in reinforcement learning. By combining the advantages of the Learned Policy Gradient (LPG) algorithm and the Unsupervised Environment Design (UED) technique, the authors propose a meta-RL algorithm, namely GROOVE (General RL Optimizers Obtained Via Environment Design), to address the generalization problem when meta-RL algorithms are applied to unseen environments. To account for curriculum generation for the meta-learned optimizer, the authors also design a metric named algorithmic regret (AR) to evaluate regret during meta-training, which is then used to guide the learning of the meta-learner. A series of experiments also shows that GROOVE outperforms baselines on Atari games.
Strengths: 1. This paper is well-organized, and the authors provide a thorough introduction to the related work.
2. Experiments show that the designed AR outperforms existing metrics.
Weaknesses: 1. The approximation of algorithmic regret seems to rely heavily on the choice of an oracle algorithm in UED, i.e., the choice of antagonist agent, which could be a performance bottleneck for GROOVE. Thus, it would be better if the authors demonstrated an ablation study using different antagonist agents, i.e., training with different RL algorithms.
2. The novelty is limited, as GROOVE is a direct combination of UED and LPG.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Will different antagonist agents affect the performance?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback about our writing and results. We have introduced **new results and edits in our summary response**, which we encourage you to read. Please find our response to each of your comments below:
1. Thank you for this suggestion, we agree that this is an interesting idea and have now performed this experiment, comparing random, expert, A2C, and PPO antagonists (**see new results**). The results verify the need for general RL algorithms as the AR antagonist and demonstrate how the choice of RL algorithm impacts generalization.
2. Whilst it is true that GROOVE builds on LPG and PLR, the primary message of our paper is that combining these existing methods is insufficient without our novel component (AR). This is demonstrated by our ablation study in Section 4.5, which shows the failure of existing metrics and the success of AR. To make this clearer, we have added a list of contributions, including this point, to the introduction (**see manuscript edits**).
We also note that our PMO formalism, the application of environment design to PMO, and our analysis of how meta-training distributions impact generalization in this setting are entirely novel.
Given the addition of your suggested experiment, along with our clarification and edit highlighting the paper’s novelty, we hope that you will consider raising your score to a stronger accept. Thank you!
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: I thank the authors for their response, and I'll keep my score as I do not think there is significant novelty. | Summary: The paper "Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design" proposes an automatic curriculum approach for meta-learning of RL optimizers. It is based on a notion of regret of the optimizer, used to choose environment parameters that are at a good level for the optimizer to leverage feedback from RL training steps. Since this notion of regret is intractable to compute exactly, the authors introduce a notion of algorithmic regret, which is an approximation. Results show interesting properties of the method.
Strengths: - Meta-learning of RL optimizers is an important problem
- The proposal looks to contain innovative components
- Interesting results
Weaknesses: - The paper is very hard, not to say impossible, to follow, as many different components are not sufficiently defined (if defined at all). To be useful, a research paper must contain every formalization and definition needed to allow an educated researcher to understand and reproduce the contribution. While I am not fully familiar with meta-learning of optimizers, I have a strong background in RL and I am still unable to put everything together in this paper to understand the proposal. I really think the paper should be fully rewritten before considering publication anywhere. Here is a (non-exhaustive) list of things that are really difficult to understand:
- U_\eta is introduced in 2.1 with y_t,\pi_t=U_\eta(x_t|x_t+1...x_T). I understand it is an LSTM on the sequence x_t+1...x_T, but why give x_t| here? Is it a distribution of x_t given the sequence? This does not fit with the fact that this is fed to the variables y and \pi. Is it the result of an LSTM step after feeding x_t into the sequence? But x_t contains y_\theta(s_t), so it does not make sense to me.
- In 2.1 we do not know how U_\eta is trained (\eta is discussed only in 3.1), so it is very difficult to understand what it captures.
- what is y_\theta and why is it subscripted with \theta as the policy \pi is? Do the policy and bootstrap share the same parameters? What is it supposed to capture?
- in 2.2 V gets a meta-optimizer F as input, in 3.1 it takes parameters and in 3.2 an RL algorithm
- G in 3.1 is not defined: is it the regret?
- \cal{H} is not defined
- F_\eta not well defined, we do not really know what controls \eta
- Algo 1: "Compute meta-gradient for \eta with updated \theta (eq 2)" ==> eq 2 is an expectation of expected return, not a gradient
- Algo 1: "update \Lambda with R and phi" => what does it concretely mean? Do you store the regret associated with phi in the buffer?
- Algo 1: What does "initialize lifetimes" mean? How is it done?
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: see weakness above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: ..
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind words about our problem, method and results. We appreciate the time you have taken to give such extensive feedback regarding our formalization. We address each of your concerns below and refer you to the **edits described in our summary response**, which we believe will resolve much of the stated confusion. If you have time, we encourage you to revisit the manuscript’s formalism after reading our edits and clarifications, which we hope will lead you to the same clarity and ease of understanding described by the other reviewers.
* Thank you for spotting this, there is a mistake where the bootstrap vectors are not subscripted with \theta, which is likely the source of the confusion (**see line 77 edit**). However, there’s a minor mistake in your question, since the output of LPG is “\hat{y}\_t, \hat{\pi}\_t”, not “y\_t, \pi\_t”. Hopefully, this answers your questions since, as we state, “LPG outputs targets \hat{y}\_t, \hat{\pi}\_t” for y\_t and \pi\_t (targets being denoted by the hat), which are distinct from the agent outputs contained in x\_t, being the “probability of the chosen action \pi\_theta(a\_t|s\_t) and bootstrap vectors for the current and next states y\_\theta(s\_t), y\_\theta(s\_{t+1})”.
* Regarding the | notation in U_\eta(x\_t|x\_{t+1}...x\_T), you are correct that this is done to reflect the reverse-recurrent processing of tokens in LPG. Without this constraint, this function could be generalized to U\_\eta(x\_t,x\_{t+1},...x\_T). However, we believe this notation is instructive for understanding the working of LPG, which processes each token individually based on the recurrent state from all future tokens.
* We have now included the definition of \eta, in order to make clear that it parameterizes LPG (**see line 74 edit**). On lines 68-71, we present an intuitive interpretation of the information learned by \eta, followed by formally defining the function it parameterizes on lines 74-78. The loss for \eta is presented immediately after in the following subsection (Equation 2) once we have formally defined the PMO problem setting in the context of UED. We hope this makes \eta clearer; if not, is there any other information you believe would do so?
* This confusion also likely results from our omitted definition of \eta (**see line 74 edit**). y\_\theta is defined as the function producing “bootstrap vectors for the current and next states y\_\theta(s\_t) and y\_\theta(s\_{t+1})”, which share the “agent parameters \theta” with \pi\_theta. On line 68-71, we provide an explanation of bootstrap vectors and their relation to critics in RL, making clear the information they capture.
* Whilst this is a common abuse of notation, we already make a point of explaining the overloading of function V each time it is performed (lines 135-136 and 149) to avoid any confusion.
* This is a good spot, thank you. We have added a definition of G to our edits (**see line 135 edit**).
* \cal{H} is a capital \eta, denoting the space of the variable \eta. Whilst this use of capitalization for parameter space is common, we acknowledge that capital \eta is uncommon so we have added an explicit definition on line 135 in our edits to avoid confusion (**see line 135 edit**).
* \cal{F} is defined as “a meta-optimizer \cal{F}: \Theta \times \bb{T} -> \Theta” on line 95. The statement “the update rule parameterized by \eta” on lines 136-137 and our new definition of \eta (**see line 74 edit**) should now make its meaning clear.
* Algorithm 1 is presented in pseudocode and intended to give a high-level overview of GROOVE’s meta-training, much like Figure 2. This is done to improve the reader’s ease of understanding the method since exhaustively defining all operations here would make the algorithm opaque. All algorithm components are individually detailed in greater depth elsewhere in the manuscript. With this in mind, we respond to each of your related comments below:
* It is true that Equation 2 is not a gradient. However, this reference is intended to contextualize a line of non-formal pseudocode rather than formally define the gradient computation, and it trivially follows that the meta-gradient is the gradient of the meta-objective given in Equation 2, so we do not believe that restating it separately adds clarity.
* That’s right, the update primarily stores the regret associated with phi in the buffer. However, the entire PLR procedure is more involved than that and would make the algorithm significantly more verbose to include in its entirety (see the original PLR work, [1]). Therefore we define \Lambda as the “PLR level buffer” in the algorithm rather than outlining the entire PLR procedure here.
* Initializing a lifetime involves sampling the level \phi from the “PLR level buffer \Lambda” and the agent parameters \theta from their “initialization function p(\theta)”. This is presented in the algorithm, with the line “Reinitialise lifetime (\phi, \theta) ~ \Lambda \times p(\theta)”.
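To illustrate the reverse-recurrent reading of U_\eta(x_t|x_{t+1}...x_T) discussed in the bullets above, here is a minimal sketch of processing tokens back-to-front. This is our own illustrative code, not LPG's actual architecture: the LSTM cell is replaced by an arbitrary `step` function for clarity.

```python
def reverse_recurrent(tokens, step, h0):
    """Process tokens back-to-front: the output for token x_t is
    conditioned on the hidden state accumulated from x_{t+1}..x_T,
    mirroring the U_eta(x_t | x_{t+1}...x_T) notation."""
    h = h0
    outputs = [None] * len(tokens)
    for t in reversed(range(len(tokens))):
        out, h = step(tokens[t], h)
        outputs[t] = out
    return outputs

# Stand-in for an LSTM cell: both the output and the new hidden state
# are the running sum of the tokens seen so far (i.e., a suffix sum).
suffix_sum_step = lambda x, h: (x + h, x + h)
outs = reverse_recurrent([1, 2, 3], suffix_sum_step, 0)  # [6, 5, 3]
```

Each output thus summarizes the token and everything after it, which is the property the authors use when stating that LPG's targets for step t condition on all future tokens.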
We thank you again for taking the time to find these oversights and hope that our edits and clarifications resolve any remaining confusion. In light of these and the opinion of the other reviewers that the paper is “well-structured/organized” (qWd8, f1yc), “easy to understand” (qWd8), and “provides a clear explanation of the underlying concepts” (TbjH), we wonder if you would be willing to increase your score in order to reflect the contributions that you mention?
[1] M. Jiang, E. Grefenstette, and T. Rocktäschel. Prioritized level replay. In International Conference on Machine Learning, pages 4940–4950. PMLR, 2021
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks to the authors for their insightful answers, which help me better understand the contribution.
Despite this and the other reviewers' opinions, I still think the paper should be reorganized to help the reader. At least the global objective should be clearly formalized at the beginning of Section 2. Maybe placing 2.2 before 2.1 would be easier to read? There is also still no clear definition of what a lifetime is, which is at the core of the algorithm. I understand the authors' argument that Algorithm 1 is a high-level picture, but I feel that in its current form it does not help. Giving some further details in it would be very insightful. Also, a picture that includes all notation to illustrate the learning process would be very helpful (Figure 1 is not).
I still think that this paper, even though I am sure it presents a very interesting contribution that I would like to see published soon, is not ready for publication at this round. However, I won't oppose its acceptance if the AC and other reviewers like it. Thus, I increase my score to borderline reject to reflect this.
---
Reply to Comment 1.1.1:
Title: Response and further additions
Comment: Thank you very much for your prompt reply and for taking our rebuttal into consideration!
We appreciate and agree with your remaining feedback about the paper’s structure. Therefore, **we have implemented all of your remaining suggestions** for the camera-ready copy (or next revision). Namely,
* *Section 2*: Move the problem formulation (lines 61-66 and 80-105) to 2.1 and the LPG background (lines 67-78) to 2.2,
* *Line 98*: Extend the definition of a lifetime to include the task \phi and the agent parameters \theta,
* *Algorithm 1*: Reference the relevant paper section for each operation (to maintain the algorithm’s simplicity whilst making further details easy to locate),
* *Supplementary materials*: Add a summary of the paper’s notation.
We hope these edits sufficiently address the remainder of your feedback. Following their implementation in the camera-ready copy, we are confident that the paper is ready for publication. If you disagree, **are there any outstanding concerns keeping the paper below the acceptance threshold?**
Thank you again for your feedback, it has greatly improved the paper’s clarity! | Summary: The paper presents an approach, i.e. GROOVE, to learning an update rule for generalization on unseen tasks automatically, based on the idea of Unsupervised Environment Design, where a student agent is trained on an adaptive distribution of environments proposed by a teacher and the teacher seeks to propose tasks which maximize the student’s regret. The authors also introduce a new concept of algorithmic regret, which is used to approximate regret and automatically generate curricula. The results of experiments comparing GROOVE and LPG demonstrate the superiority of GROOVE in terms of generalization.
Strengths: 1. The proposed method is well-motivated and the authors provide a clear explanation of the underlying concepts.
2. GROOVE, especially the use of algorithmic regret, is simple to understand and effective, which is well-supported by the experimental results and analysis.
Weaknesses: 1. The paper could benefit from listing its contributions, e.g. in the first section, which would help readers capture its novelty faster.
2. A more detailed discussion of the connection between GROOVE and related works in the fields of RL and meta-learning (i.e. in the part "Alternative Approaches to Meta-Reinforcement Learning") would help readers contextualize this work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Have the authors tried other RL algorithms as the baseline in the Algorithmic Regret? if so, how do they perform?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: as stated in the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and overall positive review of our work. We have provided **edits and new results in our summary response**, which we encourage you to read. Please find our response to the weaknesses below:
1. We thank you for this suggestion and agree that listing contributions would be an effective format, so have **added a contribution list in our edits**.
2. Since GROOVE is based on ideas from PMO and UED, we discuss its connection to each of these fields in the “Policy Meta-Optimization” and “Unsupervised Environment Design” sections of the Related Work. Here, we highlight GROOVE’s improved robustness and generalization performance on OOD tasks. Regarding the wider field of meta-RL, we contrast this with the PMO problem setting as a whole, rather than just GROOVE, in the “Alternative Approaches to Meta-Reinforcement Learning” section. This avoids us directly comparing GROOVE to methods outside of PMO, which aim to solve different problems.
In response to your question, suggesting that we evaluate alternative RL algorithms for Algorithmic Regret (AR), we have now performed and included this experiment (**see new results**). We thank you for this suggestion, as we believe it demonstrates interesting properties of AR: providing the optimal policy or a random agent is insufficient and can select artificially difficult or easy levels, whilst the choice of RL algorithm and its capability on the meta-training distribution also influences generalization.
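For intuition, the role AR plays in scoring levels can be sketched as follows (an illustrative toy with hypothetical returns, not our actual implementation, whose exact formulation is given in Section 3.2):

```python
def algorithmic_regret(antagonist_return, student_return):
    """Illustrative AR score: how much better a reference RL algorithm
    (the antagonist) does on a level than the current student trained
    with the learned optimizer. High AR marks informative levels."""
    return max(antagonist_return - student_return, 0.0)

# Hypothetical per-level returns on three levels.
levels = {
    "easy":   {"antagonist": 0.9, "student": 0.85},
    "medium": {"antagonist": 0.8, "student": 0.40},
    "hard":   {"antagonist": 0.1, "student": 0.05},
}

scores = {name: algorithmic_regret(r["antagonist"], r["student"])
          for name, r in levels.items()}

# The medium level is most informative: the antagonist solves it,
# but the student does not yet, so it is prioritized for replay.
best = max(scores, key=scores.get)
```

This also illustrates why the antagonist matters: an optimal or random antagonist would inflate the scores of artificially difficult or easy levels, respectively.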
We hope that the addition of your suggested experiment has provided the insights you hoped for, and that our edits and clarifications have sufficiently addressed both of your concerns. If so, would you consider increasing your rating and/or contribution scores? Thank you! | Summary: The authors present a meta-learning method to enhance an RL algorithm's performance across diverse RL tasks. They utilize the concept of unsupervised environment design (UED) methods for training RL agents and adapt this approach to the task of meta-learning a policy optimizer. The authors propose a novel algorithmic regret (AR) to facilitate UED and confirm its superior performance over L1 value loss and positive value loss through experiments. The experiments verify the effectiveness of the proposed method.
Strengths: 1. The paper is well-structured and easy to understand.
2. A thorough review of related studies is provided.
3. The proposed idea, while simple, appears logically sound. However, as I am not intimately familiar with the domain of reinforcement learning, I am unable to evaluate its novelty.
Weaknesses: 1. Could the authors further elaborate on why the proposed Algorithmic Regret is better than previous approximation methods? Some intuitive discussion is necessary in addition to experimental results alone. Moreover, AR is compared to L1 value loss and positive value loss in Sec. 4.5. It seems there are also some other losses in Lines 16-25. Could the authors clarify why the comparison between AR and these losses is not conducted?
2. It would be informative if the authors could discuss the scenarios depicted in Fig. 1 where Groove performs less effectively than LPG.
3. "The result is our method" in abstract could be rephrased.
4. It is mentioned that "By examining how characteristics of the meta-training distribution impact the generalization of these algorithms". It is unclear to me where it is reflected in the paper. Could the authors further illustrate it?
Minor:
Line 50 Section 4.4 -> Section 4.5
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the 'Weaknesses' section above for questions and clarifications.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your supportive review of our method, evaluation, and clarity. In response to your comments, we have provided **new results in the author rebuttal**. Please find our response to each of your comments below:
1. We have now added a comparison of A2C, PPO, expert, and random antagonist agents in AR, in order to further explain the role of the antagonist on AR’s performance (**see new results**). Additionally, we hope the simplicity of the method and the interpretation of AR as a proxy for informative levels (see Section 4.3) allows the reader to form a strong intuition for how it achieves its performance. Beyond this, we do not provide speculative interpretations for why AR succeeds in the paper, but we hypothesize that levels identified by AR will reflect the biases of the antagonist agent. Therefore, using generalizable algorithms (A2C, PPO) rather than specialized (expert) or weak (random) algorithms as the antagonist should identify levels with common properties.
Regarding the omitted regret metric (maximum Monte Carlo) mentioned on line 124, we do not evaluate against this baseline since it is only a minor adaptation of positive value loss, designed to handle sparse-reward environments [1]. Since we meta-train on Grid-Worlds, where objects frequently respawn and can be reached in a small number of steps, we do not require this adaptation and therefore use the original form of positive value loss.
2. Understanding the properties of optimizers learned by GROOVE and LPG is an interesting and important line of work. Since the application of environment design to PMO is an entirely novel topic, the focus of this work is to extensively demonstrate its potential to improve generalization performance. To achieve this, we compare both in-distribution robustness (Figure 5) and out-of-distribution performance on Min-Atar (Figure 6), Atari (Figures 1 and 4) and now Procgen (**see new results**), providing a comprehensive demonstration of GROOVE’s effectiveness. Since these benchmarks are not designed to test interpretable capabilities of agents, we were unable to find any common theme between the settings on which GROOVE outperformed LPG. However, we plan on performing this analysis in future work.
3. Thank you for pointing this out, we see how it could be unclear. We will rephrase this sentence.
4. We perform this analysis in Section 4.2, “Designing Meta-Training Distributions for Generalization”. In this, we evaluate the OOD performance of LPG whilst manually controlling the diversity and informativeness of meta-training tasks, thereby relating characteristics of the meta-training distribution to the algorithm’s generalization performance.
We hope that we have clarified each of your questions and will be sure to edit the sentence discussed in Q3. Please let us know if any of these points remain unclear, or if you have further questions. If not, in light of these responses, our new results, and your praise of our method, evaluation, and clarity, would you consider upgrading your score above a borderline? Thank you!
[1] M. Jiang, M. Dennis, J. Parker-Holder, J. Foerster, E. Grefenstette, and T. Rocktäschel. Replay guided adversarial environment design. Advances in Neural Information Processing Systems, 34: 1884–1897, 2021
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for your response to my comments. I acknowledge the efforts made to address the questions I raised. Because I am not quite familiar with the reinforcement learning topic, I will maintain my score and be attentive to the other reviewers' thoughts. | Rebuttal 1:
Rebuttal: Thank you to all of the reviewers for their detailed and insightful feedback.
We appreciate the positive comments describing our problem setting as **important** (8Dkv, 4F7T) and **well-motivated** (TbjH), in addition to our proposed method containing **innovative components** (4F7T), with Algorithmic Regret (AR) being described as **novel/new** (8DkV, qWd8, TbjH). Furthermore, we are thankful that **all reviewers provide positive feedback regarding our results**, describing them as great (8DkV) and interesting (4F7T), whilst showing the method is well-supported (TbjH), achieves superior performance to existing metrics (qWd8, f1yc) and can even [transfer] from Grid-World to Atari (8DkV). Finally, we thank the reviewers for their kind words regarding the paper’s writing, specifically that it is **well-structured/organized** (qWd8, f1yc) and **easy to understand** (qWd8), providing a **clear explanation** of the underlying concepts (TbjH) and a **thorough introduction** to related work (qWd8, f1yc).
## Manuscript edits
* Reviewers 8DkV and 4F7T pointed out omissions in our formalism. As a result, we have **added the following correction and new definitions** to rectify their concerns,
* *Line 74*: Define \eta as the meta-parameters of LPG,
* *Line 77*: “y(s_t) + y(s_{t+1})” => “y_\theta(s_t) + y_\theta(s_{t+1})”,
* *Line 135*: Define G as expected return and capitalized \eta as the space of the meta-parameters, \eta.
* Reviewers 8DkV and f1yc raised a concern over our proposed method, GROOVE, containing existing components from UED and PMO. Firstly, the application of UED to PMO is already a novelty. Furthermore, our results underscore that combining existing methods (PLR and LPG) is insufficient without our novel component (AR). In addition to this, reviewer TbjH suggested that we list contributions in the introduction to help readers capture this and other novelties faster. Therefore, we add the following **list of contributions** to the end of the introduction (*line 58*):
* In order to distinguish this problem setting from traditional meta-RL, we provide a novel formulation of PMO using the Meta-UPOMDP (Section 2).
* We propose AR (Section 3.2), a novel regret approximation for PMO, and GROOVE (Section 3.3), a PMO method using AR for environment design.
* We analyze how features of the meta-training distribution impact generalization in PMO (Section 4.2) and demonstrate AR as a proxy for task informativeness (Section 4.3).
* We comprehensively evaluate GROOVE against LPG, demonstrating improved in-distribution robustness and out-of-distribution generalization on Min-Atar, Atari, and Procgen (Section 4.4).
* We perform an ablation of AR, our novel component, demonstrating the insufficiency of existing methods (PLR and LPG) without AR, as well as the impact of the antagonist agent in AR (Section 4.5).
* We release our implementation of GROOVE, PLR and LPG, capable of single-GPU meta-training in 3 hours.
## New results (see PDF)
* Reviewers TbjH and f1yc suggested an **ablation of the antagonist agent in AR** to determine its impact on the method’s effectiveness, whilst reviewer 8DkV asked for further insights about AR. In response to both of these, we have performed the suggested ablation and will add it to Section 4.5.
* *Figure 9*: On Min-Atar, using a random or optimal agent as the antagonist for AR results in lower performance than using A2C or PPO on all environments. Furthermore, using A2C achieves higher performance than PPO on all environments.
* *Figure 10*: PPO achieves lower performance than A2C on Grid-World, with a larger gap on difficult, handcrafted Grid-World levels. This explains the previous results, as PPO will be inferior at identifying difficult levels when used as the AR antagonist. Furthermore, the update parameterized by LPG is capable of representing A2C, but not PPO. This implies that levels solvable by A2C should also be solvable by LPG, making them useful for training. In contrast, PPO may identify levels that cannot be solved without components found in PPO (clipping, mini-batch iterations) but not LPG.
* Reviewer 8DkV suggested we further evaluate **GROOVE vs LPG on Procgen**, which we have now included and will add to Section 4.4.
* *Figure 11*: After meta-training on Grid-World, we observe superior GROOVE performance on 2 out of 4 Procgen environments, superior LPG performance on 1 environment, and no difference on the remaining environment. We note that A2C is very weak on Procgen, failing to learn on the majority of environments, so we selected a subset of Procgen levels that A2C managed to learn in preliminary experiments. Procgen poses a robustness challenge that has required an extensive amount of further research to solve, using components not found in LPG or GROOVE.
In summary, we have introduced edits to address the reviewers’ concerns regarding writing and novelty, in addition to new results to provide further insights about our method. In light of these, we welcome any further feedback and hope the reviewers will consider raising their original scores.
Pdf: /pdf/131924b9c89b71839e36849949aed2f98e2e2c13.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes a new framework, GROOVE, to solve a new Meta-UPOMDP problem. It also introduces algorithmic regret (AR) to approximate the regret used to update the curator and generator (sampler?). The results show that the meta-optimizer learned by GROOVE can be used to improve performance on Atari games.
Strengths: 1. The proposed problem Meta-UPOMDP is important.
2. The experiments show that the method can work even from grid-world to atari.
3. The AR idea is novel.
4. The results of AR in Fig. 6 are great.
Weaknesses: 1. Several definitions are absent, including the sampler in Fig2 and the symbol \eta mentioned in line 74.
2. It would be beneficial to introduce PMO earlier. The headline and abstract might lead readers to expect traditional RL and few-shot meta-learning.
3. It is recommended to conduct tests on additional environments, such as Procgen. Even just for evaluating the meta-optimizer learned in Grid-World.
4. Since AR constitutes a crucial aspect of the work, further insights should be provided.
5. The majority of the work involves combining existing works (LPG and PLR).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. How do you select algorithm A in equation 4? Including the hyperparameter of the algorithm.
2. The claim in 209 seems similar to [1]. Please explain the difference.
3. What is the probability of using the generator and the curator?
[1] Li, Y., & Zhang, C. (2022). On Efficient Online Imitation Learning via Classification. Advances in Neural Information Processing Systems, 35, 32383-32397.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Mentioned in weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your kind words regarding the problem’s importance, the novelty of algorithmic regret (AR), and the demonstrations of its performance. We encourage the reviewer to **read the edits and new results outlined in the author rebuttal**. We respond to each of your comments below:
Weaknesses:
1. Thank you for pointing these out, we have now added the definition of \eta (**see line 74 edit**). The use of “sampler” rather than “generator” (see Section 3.3) reflects that this component generates random levels, as in PLR, rather than learning a generative model. We will highlight this distinction in the caption.
2. We agree that the distinction between PMO and traditional meta-RL is important. No formal distinction between the two settings has been proposed in prior work, which is why we introduce PMO as a novel problem formulation in this work. Since readers will be unfamiliar with this terminology, we use existing terminology - such as “discovering RL algorithms” in the title and “meta-learn update rules” in the abstract - to describe the setting before mentioning it in the introduction and formalizing it in Section 2.1.
3. This is a good suggestion and we have now added your suggested experiment of GROOVE vs. LPG on Procgen (**see new results**). We chose to prioritize Atari for our original evaluation to be consistent with the environments used in the original LPG paper, but we hope this provides the additional insights you hoped for.
4. We have now added a further experiment for AR, evaluating the impact of different antagonist agents on generalization performance (**see new results**). In addition to this, we believe Figure 3 provides strong insight into AR as a proxy objective for the informativeness of levels, whilst Figure 6 directly compares it to existing metrics, demonstrating its effectiveness.
5. Whilst it is true that GROOVE builds on LPG and PLR, the primary message of our paper is that these existing methods fail to work together without our novel component, AR. This is demonstrated by our ablation study in Section 4.5, which shows the failure of existing metrics and success of AR. To make this clearer, we have added a list of contributions, including this point, to the introduction (**see manuscript edits**).
Questions:
1. For generality, we define AR to be agnostic to the underlying RL algorithm. In our experiments, we selected A2C due to it being used by LPG at meta-train time, using existing hyperparameters for A2C from prior work. We detail these in Table 2 of the supplementary material, but will add a clarification of the source of our hyperparameters there.
2. Thank you for referring us to this work, the similarity is very subtle. The foremost difference is problem setting: they study algorithms for online imitation learning, rather than policy meta-optimization algorithms. In addition to this, they analyze sample complexity in the number of interaction rounds, expert annotations and oracle calls, whilst we fix the number of environment interactions (samples) and control the number of training environments.
3. This can be found in Table 1 of the supplementary material. The probability of using the curator (known in UED literature as “replay probability”) is a tunable hyperparameter of the algorithm. We use 0.5, which we selected from preliminary tuning and prior PLR implementations.
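To illustrate how this replay probability is used, a minimal sketch of a PLR-style level-selection step follows (illustrative only; the function names and buffer contents are hypothetical, not our implementation):

```python
import random

REPLAY_PROB = 0.5  # the tunable "replay probability" hyperparameter

def choose_level(buffer, sample_new_level, rng=random):
    """Illustrative PLR-style decision: with probability REPLAY_PROB,
    replay a high-scoring level from the curator's buffer; otherwise
    draw a fresh random level from the sampler."""
    if buffer and rng.random() < REPLAY_PROB:
        # Curator: pick the highest-scoring buffered level
        # (PLR itself uses a rank-based sampling distribution).
        return max(buffer, key=buffer.get)
    return sample_new_level()

buffer = {"level_a": 0.4, "level_b": 0.1}
level = choose_level(buffer, lambda: "fresh_level", random.Random(0))
```

The full PLR procedure additionally maintains staleness scores and a rank-based replay distribution, which this sketch omits.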
In summary, we hope that the referenced edits have resolved Weaknesses 1 and 5, our new results resolve Weaknesses 3 and 4, and that we have provided sufficient clarification for Weakness 2 and your questions. We thank you again for your praise of the importance, novelty, and evaluation of this work. In light of our response, would you consider increasing your score to an accept? Thank you!
---
Rebuttal Comment 1.1:
Title: Official comment
Comment: 1. Thank you for providing the results of Procgen. However, the experiments do not train for enough steps. Normally, Procgen needs to train for 200m steps. Hence, it is hard to tell about the improvement.
2. Thank you for providing experiments on using different algorithms as regret antagonists. However, it seems like most algorithms are bad besides A2C. These results lowered my confidence in the method.
---
Reply to Comment 1.1.1:
Title: Clarifications
Comment: 1. Our results show Procgen training for **200M environment steps**, which is equivalent to **50K train steps** (agent updates). Therefore, we do follow the standard procedure.
2. These results are entirely consistent with our hypothesis: AR with A2C or PPO as the baseline **outperforms specialized algorithms on every environment and all other alternative regret measures on 3/4 environments**. Furthermore, due to the time allowed for rebuttal, we were unable to fully tune our PPO implementation. Despite this, we hypothesize that PPO would still underperform A2C due to it having a different functional form than LPG, which motivated the original choice of A2C. | null | null | null | null | null | null |
Open Compound Domain Adaptation with Object Style Compensation for Semantic Segmentation | Accept (poster) | Summary: This paper strives to capture various category-level object style information and then compensate the style information of the object instances from the target to the source domain in the open compound domain adaptation for semantic segmentation. And they evaluate the proposed method on various source and target datasets to verify the robustness and universality.
Strengths: Overall, the writing is well-organized and easy to follow. And the strengths can be summarized as follows:
1. The proposed approach to handling the open compound domain adaptation task from a novel category-level style compensation perspective is interesting and will bring some insights to the community.
2. The components of discrepancy memorization and style compensation are convincing. And the authors conducted comprehensive comparison experiments as well as ablation studies among various benchmarks.
Weaknesses: There are some weaknesses listed as follows:
1. The terminology "Instance-Key" may not be appropriate in the context of semantic segmentation tasks, where instance-level annotations are not available.
2. The content in Table 7 could be merged, or some sub-tables could be moved to the supplementary material, to improve the readability and aesthetics of the paper's layout.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >### 1. The terminology "Instance-Key" may not be appropriate in the context of semantic segmentation tasks where instance-level annotations are not available.
Please see our discussion in the second part of “Response to Common Questions”. The term “instance” in “instance-key feature” does not strictly refer to a single object instance. An instance-key feature is updated by object features of the same category, which are extracted from different images. The object features from different images naturally represent various object instances, so the feature is named “instance-key” to indicate that it is sensitive to object instances. More specifically, the instance-key feature summarizes object instances of the same category with similar visual styles. Consequently, an instance-key feature captures one of the representative styles that appear widely across object instances of the same category. We plan to rename the “instance-key feature” to the “representative-key feature” to fix the inappropriate name.
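To make this concrete, the kind of category-level moving-average update we describe can be sketched as follows (an illustrative toy only; the momentum value, feature sizes, and values are hypothetical and not necessarily the paper's exact update rule):

```python
def update_key_feature(memory, category, obj_feature, lam=0.9):
    """Illustrative moving-average update of a category's
    representative-key feature from object features observed in
    different images (lam is a hypothetical momentum value)."""
    if category not in memory:
        memory[category] = list(obj_feature)
    else:
        memory[category] = [lam * k + (1.0 - lam) * f
                            for k, f in zip(memory[category], obj_feature)]
    return memory

memory = {}
# Object features of the "car" category from three target images;
# the key feature drifts toward their shared (representative) style
# rather than memorizing any single instance.
for feat in ([1.0, 0.0], [0.8, 0.2], [0.9, 0.1]):
    update_key_feature(memory, "car", feat)
```

Because the update mixes features from many images, the stored key summarizes a representative style of the category, which is the behavior the renamed term is meant to convey.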
>### 2. The content in Table 7 could be merged, or some sub-tables moved to the supplementary material, to improve the readability and aesthetics of the paper's layout.
Thanks for your valuable comment. As the below tables, we plan to merge Table 7(a-b), (c-d), (e-f), respectively, which hopefully improve the paper layout.
**Table(a-b): Train: GTA5(Source)/SYNTHIA(Source), C-Driving(Target). Test: C-Driving(Target).**
|||GTA5(S)|SYNTHIA(S)|SYNTHIA(S)|
|----|:----:|:----:|:----:|:----:|
|Method|Type|mIoU(T)|mIoU$^{16}$(T)|mIoU$^{11}$(T)|
|Source-only|-|28.3|20.9|28.1|
|CDAS|OCDA|31.4|25.3|34.0|
|CSFU|OCDA|34.9|26.1|34.8|
|DACS|UDA|36.6|28.1|36.5|
|DHA|OCDA|37.1|29.9|37.6|
|AST|OCDA|38.8|31.1|38.9|
|ML-BPM|OCDA|40.2|32.1|40.0|
|**Ours**|OCDA|**44.1**|**35.6**|**43.7**|
**Table(c-d): Train: GTA5(Source)/SYNTHIA(Source), C-Driving(Target). Test: C-Driving(Open), Cityscapes(Open), KITTI(Open), WildDash(Open).**
|||||GTA5(S)|||||SYNTHIA(S)|||
|----|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|Method|Type|CD|CS|KT|WD|Avg|CD|CS|KT|WD|Avg|
|CSFU|OCDA|38.9|38.6|37.9|29.1|36.1|36.2|34.9|32.4|27.6|32.8|
|DACS|UDA|39.7|37.0|40.2|30.7|36.9|36.8|37.0|37.4|28.8|35.0|
|RobustNet|DG|38.1|38.3|40.5|30.8|37.0|37.1|38.3|40.1|29.6|36.3|
|DHA|OCDA|39.4|38.8|40.1|30.9|37.5|38.9|38.0|40.6|30.0|36.9|
|AST|OCDA|40.7|40.3|41.9|32.2|38.8|40.5|39.8|41.6|30.7|38.2|
|ML-BPM|OCDA|42.5|41.7|44.3|34.6|40.8|42.6|41.1|43.4|30.9|39.5|
|**Ours**|OCDA|**46.9**|**43.6**|**46.5**|**40.1**|**44.3**|**48.5**|**48.0**|**51.3**|**39.6**|**46.9**|
**Table(e-f): Train: GTA5(Source)/SYNTHIA(Source), ACDC(Target). Test: ACDC(Target and Open).**
|||GTA5(S)|GTA5(S)|SYNTHIA(S)|SYNTHIA(S)|
|----|:----:|:----:|:----:|:----:|:----:|
|Method|Type|mIoU(T)|mIoU(O)|mIoU$^{16}$(T)|mIoU$^{16}$(O)|
|Source-only|-|20.5|27.1|19.8|20.5|
|CDAS|OCDA|25.3|29.1|25.9|23.3|
|CSFU|OCDA|27.6|30.5|26.7|24.8|
|DACS|UDA|29.0|34.8|28.3|27.0|
|DHA|OCDA|29.5|37.5|29.2|27.3|
|AST|OCDA|30.7|39.2|30.1|27.9|
|ML-BPM|OCDA|32.1|41.6|31.9|29.1|
|**Ours**|OCDA|**35.7**|**44.1**|**34.7**|**36.4**|
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for addressing most of my concerns. I choose to keep my score as borderline accept.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Review
Comment: Dear Reviewer CD2C,
Thank you again for your review. We are pleased to see that the questions raised by you are solved.
Best,
Authors of Paper ID 10143 | Summary: This paper deals with the open compound domain adaptive semantic segmentation problem. The motivation is to compensate the object style gap across domains to obtain more accurate pseudo labels for self-training. The main framework consists of two parts: discrepancy memory and style compensation. For discrepancy memory, the aim is to memorize the category-level and instance-level feature discrepancy between target and source features. For style compensation, the aim is to compensate the style variation from source to target to obtain high-quality pseudo labels to facilitate the subsequent self-training procedure. Finally, two cross-entropy loss terms with ground-truth source annotations and target pseudo labels respectively are minimized to update the segmentation network. Experiment results show that the proposed method not only improves the segmentation performance on the target domain, but also yields better segmentation results on the open domain, compared to previous methods.
Strengths: - The idea is simple and straightforward.
- The paper is generally easy to follow and understand.
- Ablations of various alternative solutions are conducted.
Weaknesses: - Some details are not clearly stated. For example, what is the intermediate segmentation head? From the start to the end of the paper, I don't find a formal definition of the open compound domain adaptive semantic segmentation task. A clear definition of the task may help the reader to better understand the proposed method.
- The proposed method involves many hyper-parameters, e.g., the $\lambda$ in Eq. (1); the $\gamma$ in Eq. (2) and Eq. (4); the $K$ in Eq. (3), etc. As we know, for a domain adaptation problem, it is hard to perform hyper-parameter selection. I doubt how those hyper-parameters affect the final performance. The authors should provide empirical studies and analysis.
- For the Instance-Key and Discrepancy Features, they are instance-level (see Eq. (2)). I doubt whether such a method can scale to large-scale datasets, as the $m$ may be super large (there are large numbers of pixels within one specific category).
- For the ablation part, I suggest the authors use equations along with text to better describe each alternative solution. With text descriptions only, it is not easy for the reader to accurately capture the meaning the authors want to express.
- Some necessary ablations are missing. For example, in Eq. (7), how about moving the first term and just using the last term? In Eq. (2) and Eq. (3), how about not using the weighting mechanism, instead simply using the average? For Eq. (1), how about simply averaging features without $\lambda$?
- For Eq. (1), it actually may change the scale of feature values as time goes on. I just doubt why this could work. Maybe moving average is more suitable.
- From the experiment results, I see the mIoU for the open domain is generally higher than the target domain, though open domain images are not utilized during training. Do the authors have any explanations?
- From a high-level, I doubt if this way can really help the segmentation on the open domain, as the compensation is designed for the target, i.e., based on the distribution or statistics of target images or features. How can such compensation generalize to an unseen domain?
- The paper should be polished further to make the descriptions more professional and accurate.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See the weakness part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I don't see serious societal issues.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >### 1. Some details are not stated, like the intermediate segmentation head and the definition of OCDA for semantic segmentation.
Figure 2 of the rebuttal file illustrates the intermediate head as the convolutional layers. This head takes the target feature as input to regress the category score map.
The existing works (i.e., ML-BPM [ECCV 2022], CSFU [CVPR2021], and DHA [NeurIPS2020]) have defined OCDA for semantic segmentation. In training, the segmentation network learns from the source domain, where the images have pixel-level annotations. The network also learns from the target images, whose styles differ from those of the source images. The target images are given without annotations; thus, their pseudo annotations must be regressed for network training. In testing, the segmentation network is evaluated on open-domain images with styles unseen during training. We will add this definition to the paper.
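The training setup just described can be summarized as a two-term objective: supervised cross-entropy on labeled source images plus cross-entropy on target images against their regressed pseudo labels. A minimal numpy sketch follows; the function names are illustrative, not from the paper's code, and the exact form of the paper's Eq. (7) may include additional weighting.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy. logits: (N, C) class scores; labels: (N,) ints."""
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def ocda_loss(logits_src, y_src, logits_tgt, pseudo_tgt):
    """Assumed Eq. (7)-style objective: supervised source term plus
    pseudo-labeled target term (sketch, not the authors' implementation)."""
    return cross_entropy(logits_src, y_src) + cross_entropy(logits_tgt, pseudo_tgt)

# Uniform logits over 19 classes (Cityscapes-style) give log(19) per term.
loss = ocda_loss(np.zeros((4, 19)), np.zeros(4, dtype=int),
                 np.zeros((4, 19)), np.zeros(4, dtype=int))
print(loss)  # 2 * log(19) ~ 5.889
```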
>### 2. The method involves many hyper-parameters, which are hard to select. How do they affect the performance?
In Section 2.2 of the supplementary file, we have studied the hyper-parameters (i.e., the memory capacity $M$ in Eq. (2), the ratio $\lambda$ in Eq. (1), the score threshold $\gamma$ in Eqs. (2) and (4), and the feature set number $K$ in Eq. (3)). The performances in Figures 1-5 of the supplementary file guide our selection: the hyper-parameters that lead to the best results on the C-Driving dataset are then used for evaluating our method on the Cityscapes, KITTI, WildDash, and ACDC datasets. Though hyper-parameter selection is hard in domain adaptation, our approach is robust: the same set of hyper-parameters outperforms other methods on various datasets.
>### 3. For the Instance-Key and Discrepancy Features, they are instance-level (see Eq. (2)). Can such a method be scalable to large-scale datasets, as the $m$ may be super large?
“$m=1,…,M$” is the index of the instance-key feature, which captures one of the representative styles appearing with the same category's instances. “$M=50$” already leads to satisfactory performances on various datasets (see the second part of “Response to Common Questions”).
>### 4. The authors should use equations with text to describe each ablation alternative.
Thanks. We plan to add the equations with Tables 2-6 as below.
w/o OLDM: w/o Eqs. (1)(2)(3)
global(T2), 100% update(T3), multi-sets, category(T4), instance similarity(T5), final(T6): w/ all Eqs. in paper
T2
mean instances$$A_l = \frac{1}{M} \sum_m N_{l,m}$$
local$$A_l = \frac{1}{bs} \sum_{bs} F_s(x,y)$$
T3
mean discrepancy$$D_{l,m} \leftarrow \frac{N\cdot D_{l,m} + (A_l - F_t(x,y))}{N+1},\widetilde{F_t}(x,y) = F_t(x,y) + \sum_k \sum_mD_{k,m}$$
discrepancy similarity$$w_{l,m} = \frac{D_{l,m} \cdot F_t(x,y)}{\sqrt{C}}$$
top-1 update$$m = \arg\max_m \\{w_{l,m}\\}, \quad w_{l,m}=\frac{N_{l,m} \cdot F_t(x,y)}{\sqrt{C}}$$
top-50% update$$m\in \operatorname{top}_{M/2}\\{w_{l,m}\\}, \quad w_{l,m}~\text{as above}$$
T4
key discrepancy$$D_{l,m} = A_l - N_{l,m}$$
merged sets$$A \leftarrow A + \lambda F_s(x,y),N_m \leftarrow N_m + w_mF_t(x,y),D_m \leftarrow D_m + w_m(A-F_t(x,y)),w_m = \frac{N_{m} \cdot F_t(x,y)}{\sqrt{C}}$$
multi-sets, k-means$$C_i=\\{D_m | d(D_m, c_k)\ge d(D_m,c_i)~~for~k\neq i\\}, c_i=\frac{1}{|C_i|}\sum_{D_m\in C_i}D_m$$
T5
mean discrepancy$$\widetilde{F_t}(x,y) = F_t(x,y) + \sum_mD_{k,m}$$
T6
w/o pseudo: w/o Eq. (4)
intermediate: replace $\widetilde{{\bf R}}$ with ${\bf R}$ in Eq. (4)
>### 5. Ablations are missing. Eq. (7) w/o the first term? Eqs. (2-3) w/o weighting but the average? Eq. (1) w/o $\lambda$ but the averaging?
By removing the first term of Eq. (7), we turn off the intermediate segmentation head in training, missing a chance to update the shallow object feature map $F_t$. It degrades the result on the C-Driving dataset from 44.1 to 38.4 mIoU.
In Eqs. (2-3), the weights measure the similarities between the object and instance-key features. They re-weight the discrepancy features to compensate for the object features of the target/open domain to the source domain. By replacing the weighting with the moving average, Eq. (3) fails to select the appropriate discrepancy features, degrading the result from 44.1 to 40.7 mIoU. Please also see the first part of “Response to Common Questions”.
In Eq. (1), replacing $\lambda$ with the moving average makes little difference to the result (44.1 vs. 43.9 mIoU). Empirically, $\lambda=0.001$ prevents outlier object features of the source domain from primarily impacting the category-key features. It also lets the category-key features converge faster than the moving average (250K vs. 350K iterations).
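The contrast between the two update rules can be sketched as below. This is an illustrative reconstruction, not the paper's code: it assumes Eq. (1) has the accumulation form $A \leftarrow A + \lambda F_s(x,y)$ suggested by the merged-sets ablation, and compares it with a running mean.

```python
import numpy as np

def update_fixed_rate(A, F_s, lam=0.001):
    # Fixed small rate (assumed Eq. (1) form): an outlier source
    # feature F_s barely moves the category-key feature A.
    return A + lam * F_s

def update_moving_average(A, F_s, n):
    # Running mean over the n features seen so far: a single outlier
    # shifts A strongly while n is small.
    return (n * A + F_s) / (n + 1)

A = np.ones(4)
outlier = 100.0 * np.ones(4)
print(update_fixed_rate(A, outlier))         # each entry 1.1 (small shift)
print(update_moving_average(A, outlier, 1))  # each entry 50.5 (large shift)
```

This illustrates the rebuttal's point: the fixed rate suppresses outliers, at the cost of the scale drift the reviewer notes, which the moving average avoids.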
>### 6. Eq. (1) may change the scale of features as time goes on. Why this work? The moving average is more suitable.
Please see the last paragraph of our response to your fifth concern.
>### 7. mIoU for open domain is generally higher than the target domain, though the open domain is not for training. Any explanation?
With GTA5 as the source domain, better performance on the open domain of ACDC than on the target domain has also been observed for many methods (e.g., CDAS [CVPR 2020], AST [AAAI 2022], and ML-BPM [ECCV 2022] in Table 7(e)). Though this comparison between distinct domains with different images is unfair and outside the evaluation scope, it arises partially because the open domain lies near the source domain, to which the features are easily transformed. With SYNTHIA as the source domain, which is far from the open domains, this phenomenon is generally not observed. Even so, our method effectively transforms the open-domain object features to the source domain, yielding better performance than on the target domain.
>### 8. Can this approach really help segmentation in the open domain, given that the compensation is designed based on the distribution or statistics of target images or features? How can it generalize to an unseen domain?
Please see the second part of “Response to Common Questions”.
>### 9. The paper should be polished.
Thanks. We will refine our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' effort for providing detailed response. After reading the response, I still feel confused about the reason why the compensation designed based on the distribution or statistics of target images or features can generalize to unseen domains.
---
Reply to Comment 1.1.1:
Title: Thanks for your response
Comment: Thanks for your response! Please note that we have provided the reason why the memory storing the discrepancy features helps generalization to unseen domains. Below, we provide a summary to make it easier to understand.
The compensation you mention, designed based on the distribution or statistics of target images or features, means using the memory to store the discrepancy features for compensating the target images to the source domain. Actually, the discrepancy features represent the difference between the source and target images, rather than representing the target images only. Thus, the discrepancy features build a **clear association** between the target and source images. The advantage of the discrepancy features for generalization to the unseen domain is two-fold:
- **A Better Representation of Category-level Style Difference across Target and Source Domains**. With a **clear association** between the source and target images, the discrepancy features can represent the difference between the object styles of the same category but across the source and target domains. This can be easily done because images have pixel-wise annotations of semantic categories. It means the discrepancy features are specified to the corresponding categories, thus representing the category-level style difference across source and target domains.
- **A More Generalized Representation of Category-level Style Difference across Unseen and Source Domains**. Given an object of the same category as in the target/source domains but from an unseen domain, we can use its feature to compute similarities with the object features of target images. Note that these similarities also have a **clear association** with the discrepancy features, because they are computed based on the target and unseen object features. They re-weight the associated discrepancy features, forming the **new discrepancy feature** that compensates for the unseen style, which is then transformed to the source domain. **Please note that the new discrepancy feature is the core of the category-specific generalization to the unseen domain**.
**Simply put, the success of our method stems from using the discrepancy features attending to the semantic categories, which build the clear association between the style difference across target-source domains and the similarity between unseen-target domains, finally forming the new and generalized discrepancy feature to transform a category's style of the unseen domain to the source domain**. Please check the visualized process of our method in Figure 1(b) of our rebuttal file (https://openreview.net/attachment?id=LOvUNcunkJ&name=pdf). In comparison, the existing methods trivially depend on the parameters of the deep network to transform the image features across different domains, failing to achieve the above success. This is because the network contains many layers of parameters. How can the parameters be associated with each semantic category, or with an unknown number of target and unseen domains, in different layers? How can we re-weight and re-combine these parameters without an explicit association between the parameters and the target domains? Without properly addressing these problems, the existing methods cannot achieve the **clear association** between the **category-level style difference** across target-source domains and the similarity between unseen-target domains, and thus show worse generalization to the unseen domain than our method.
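The re-weighting mechanism described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: it uses the scaled dot-product similarity $w_m = N_m \cdot F_t / \sqrt{C}$ given in the ablation equations, while the softmax normalization and the names `keys` and `D` are our assumptions.

```python
import numpy as np

def compensate(F_t, keys, D, C):
    """Compensate an unseen-domain query feature F_t toward the source style.
    keys: (M, C) stored instance-key features; D: (M, C) stored discrepancy
    features. Weights from key similarities re-combine D into a new
    discrepancy feature (softmax normalization is an assumption)."""
    w = keys @ F_t / np.sqrt(C)       # w_m = N_m . F_t / sqrt(C)
    w = np.exp(w - w.max())
    w /= w.sum()                      # normalize to a convex combination
    D_new = w @ D                     # new, re-weighted discrepancy feature
    return F_t + D_new                # compensated feature

rng = np.random.default_rng(0)
keys = rng.normal(size=(50, 8))   # M = 50, matching the default capacity
D = rng.normal(size=(50, 8))
F_t = rng.normal(size=8)          # query feature from an unseen domain
print(compensate(F_t, keys, D, C=8).shape)  # (8,)
```

The key point the sketch makes concrete: no parameter of the segmentation network is domain-specific; generalization comes entirely from re-weighting the stored memory entries per query.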
---
Rebuttal 2:
Title: Sincerely Request Your New Comment
Comment: Dear Reviewer okqW,
We thank you again for your valuable comments, which significantly help us polish our paper. We look forward to discussing with you any questions that have not yet been addressed satisfactorily.
Best,
Authors of Paper ID 10143
---
Rebuttal Comment 2.1:
Title: Sincerely Request Your New Comment Again
Comment: Dear Reviewer okqW,
Again, please allow us to extend our sincere thanks to you, for your time and efforts of reviewing our paper. As the deadline for the authors' response is approaching, we sincerely request your comment on our primary response. This will definitely give us a valuable chance to address the questions unsolved.
Best,
Authors of Paper ID 10143
---
Rebuttal 3:
Comment: Dear Reviewer okqW,
Could you give a quick feedback to authors' rebuttal? Thank you!
Best regards,
AC | Summary: The paper introduces Object Style Compensation which involves constructing an Object-Level Discrepancy Memory consisting of multiple sets of discrepancy features to minimize the style differences between the source and target domains and ensure the styles of different object categories or instances within the scene to be adapted well. In particular, the paper learns these discrepancy features from images of both domains and stores them in memory. By leveraging this memory, appropriate discrepancy features are selected to compensate for the style information of object instances across various categories, aligning their styles to a unified style of the source domain. This method enables more accurate computation of pseudo annotations for target domain images and achieves state-of-the-art results on different datasets.
Strengths: - Overall the paper is well organized and easy to read.
- The idea of using the object-level discrepancy memory for domain adaptation is interesting.
- Achieved better semantic segmentation performance than SOTA methods.
Weaknesses: - What's the value of the parameter $\gamma$ set in Eqn (2) and (4), and how to determine its value? I would like to suggest the authors to run a group of experiments with different values of $\gamma$ to see how the value impacts the final performance.
- The implementation detail is missing in the main paper. I would like to suggest the authors move this part from the supplementary to the main paper so that the reader can well understand and even reproduce the results if necessary.
- The current paper still misses one important baseline, i.e., using the category-level discrepancy rather than the instance-level.
- For the visualization part, the visualization in the current paper is still very simple. I am curious whether the current method can handle the complicated scenes (e.g., with overlap and occlusion between objects with same or different categories).
- The authors didn’t provide the failure cases and the corresponding analysis in the current paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What’s the runtime for the current method?
2. What are the failure cases?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The author didn’t discuss the limitations of the current proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >### 1. What's the value of the parameter $\gamma$ set in Eqn (2) and (4), and how to determine its value? I would like to suggest the authors to run a group of experiments with different values of $\gamma$ to see how the value impacts the final performance.
Thanks. We have provided this analysis in Section 2.2 "Sensitivity Analysis of score threshold $\gamma$" of the supplementary material. Figures 3 and 4 of the supplemental material report the performances with different values of $\gamma$ on the C-Driving dataset. The $\gamma$ that leads to the best performance on the C-Driving dataset is finally used for evaluating our method on the different datasets.
>### 2. The implementation detail is missing in the main paper. I would like to suggest the authors move this part from the supplementary to the main paper so that the reader can well understand and even reproduce the results if necessary.
Thanks. We will move the necessary details to the main paper according to your suggestion.
>### 3. The current paper still misses one important baseline, i.e., using the category-level discrepancy rather than the instance-level.
Thanks. The table below compares the performances achieved by the category- and instance-level discrepancy features. Compared to the baseline without style compensation ("w/o OLDM"), the category-level discrepancy features ("category-level discrepancy") help to transform the target and open domains' object features to the source domain, finally yielding better performances. However, the category-level discrepancy features insufficiently account for the style difference between the object instances of the target/open and the source domains. Thus, their performances lag behind our method with the instance-level discrepancy features.
| method | mIoU(Target) | mIoU(Open) |
| :-: | :-: | :-: |
| w/o OLDM | 36.6 | 39.7 |
| category-level discrepancy | 38.1 | 40.2 |
| Ours | 44.1 | 46.9 |
>### 4. For the visualization part, the visualization in the current paper is still very simple. I am curious whether the current method can handle the complicated scenes (e.g., with overlap and occlusion between objects with same or different categories).
Thanks. In Figure 3 of the rebuttal file, we zoom in on some regions from the segmentation results. Our method of object-level style compensation yields satisfactory segmentation results in complicated scenes where the objects are overlapped or occluded. We will add these results to the main paper and supplementary file.
>### 5. The authors didn’t provide the failure cases and the corresponding analysis in the current paper.
We respectfully clarify that a discussion of failure cases is provided in Section 3.1 of the supplementary file. As discussed there, images with adverse weather conditions lead to unsatisfactory results. Failure cases also appear when images of the open domains deviate significantly from the source domain. These factors make it challenging to select appropriate discrepancy features from OLDM. Though our method outperforms other methods, we will investigate how to reduce failure cases in challenging scenarios.
>### 6. What’s the runtime for the current method?
In Figure 1(c) of the supplementary file, we study the testing time of our method by changing the capacity of OLDM. With the default capacity (M=50), we need 0.380 seconds on average to process a 1280x720 image. We also compare the testing time of our method with other methods in the table below. Our method achieves better performance at the cost of a reasonable testing time (seconds per 1280x720 image).
| method | source-only | CDAS | CSFU | DACS | RobustNet | DHA | AST | ML-BPM | Ours |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| running time (s) |0.345 |0.387 |0.383 |0.364 |0.378 |0.397 |0.411 |0.402 |0.380 |
>### 7. The author didn’t discuss the limitations of the current proposed method.
Please see Section 3 of the supplementary file, where we have analyzed the limitations of our approach. This analysis comprises two aspects: "Failure Cases" and "Evaluation of OLDM in Cross-Dataset Scenarios".
---
Rebuttal 2:
Title: Sincerely Request Your New Comment
Comment: Dear Reviewer XmuL,
We thank you again for your valuable comments, which significantly help us polish our paper. We look forward to discussing with you any questions that have not yet been addressed satisfactorily.
Best,
Authors of Paper ID 10143
---
Rebuttal Comment 2.1:
Title: Sincerely Request Your New Comment Again
Comment: Dear Reviewer XmuL,
Again, please allow us to extend our sincere thanks to you, for your time and efforts of reviewing our paper. As the deadline for the authors' response is approaching, we sincerely request your comment on our primary response. This will definitely give us a valuable chance to address the questions unsolved.
Best,
Authors of Paper ID 10143 | Summary: The authors propose a novel target-to-source feature style transfer approach for open compound domain adaptation. Inspired by the observation of the existence of object style discrepancy when converting a target image to a source style, they design Object-Level Discrepancy Memory (OLDM). This module saves the category-key features of the source domain, instance-key features of the target domain, and discrepancy features between source and target domains. When performing style transfer from target features to source features, they compensate for the object inconsistency using discrepancy features specific to each object. They evaluate their method on various driving datasets in semantic segmentation.
Strengths: Their motivation is good and intuitive. I agree that there will be a certain degree of appropriate object style for each domain.
It appears that well-designed modules are in place based on motivation.
Weaknesses: In the related work section, there is a simple listing of various papers, and the flow of each research is not well organized. Furthermore, there is little mention of papers published after 2021, which seems to indicate a lack of investigation into recent papers.
Based on the proposed method alone, it seems that it is not specifically designed for open compound domain adaptation but rather for multi-target domain adaptation. The difference between the two tasks lies in whether there is a consideration for how to generalize to unseen domains, which does not appear to be addressed in this paper. The proposed target-to-source feature style transfer method is suitable for multi-target domain adaptation as it aims to obtain more accurate pseudo labels for each target domain. [1] presents a concept of transferring source features into various styles to achieve domain generalization. By incorporating additional well-designed techniques such as source feature style randomization, it can be considered a suitable method for open compound domain adaptation.
This research focuses on semantic segmentation, but the method requires instance masks. However, the paper does not explain how to obtain instance masks.
Minor issues.
- The content in Figure 2 seems to be sufficiently understandable from Figure 1-(c). It would be better to combine Figures 2 and 3 appropriately and allocate more space to the related work section.
- There are three duplicated reference numbers of '23' on line 65.
[1] WildNet: Learning Domain Generalized Semantic Segmentation from the Wild (CVPR'22)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: As I mentioned in the weaknesses section, this work seems to be most related to multi-target domain adaptation. Could the authors explain this research from the perspective of domain generalization?
While proposing source-to-target feature style transfer in addition could potentially offer performance advantages, what is the reason behind only proposing the target-to-source approach?
This method seems to require instance masks, but how do we obtain instance masks from a dataset that only has semantic masks?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: They address the limitations of their work in supplementary materials.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >### 1. In the related work section, there is a simple listing of various papers, and the flow of each research is not well organized. Furthermore, there is little mention of papers published after 2021, which seems to indicate a lack of investigation into recent papers.
Thanks. We have added 14 publications since 2021 to the related work. We plan to reorganize the related work as below.
In the flow of "domain adaptation for semantic segmentation", the unsupervised domain adaptation (UDA) and multi-target domain adaptation methods train networks through style regression (SMPPM [AAAI2022]), domain adversarial learning (DWL [CVPR2021], CCL [CVPR2021]), and self-training (FixBi [CVPR2021], MTKT [ICCV2021]). In contrast, the domain generalization (DG) methods involve learning domain-invariant representations (T3A [NeurIPS2021], COMEN [CVPR2022], PCL [CVPR2022]) and data augmentation (L2D [ICCV2021], MBDG [NeurIPS2021], StyleNeophile [CVPR2022]). Regarding open compound domain adaptation (OCDA), existing methods conduct style regression based on scene-level (DHA [NeurIPS2020], CSFU [CVPR2021], AST [AAAI2022], ML-BPM [ECCV2022]) and category-level (CDAS [CVPR2020]) attributes.
In the flow of "deep network with memorization for visual understanding", the memorization technique is demonstrated in the context of image (CICM [NeurIPS2022], ATDOC [CVPR2021], PinMemory [CVPR2022]) and video (HF2-VAD [ICCV2021], XMem [ECCV2022]) tasks. Additionally, based on the granularity at which objects are stored in memory, these methods can be categorized at either the scene level (MOSS [CVPR2021], CICM [NeurIPS2022], XMem [ECCV2022]) or the category level (ATDOC [CVPR2021], PinMemory[CVPR2022]).
>### 2. The proposed method is not specifically designed for open compound domain adaptation but rather for multi-target domain adaptation. How to generalize to unseen domains is not addressed.
Our method of style compensation is specifically designed to transform the object features of the target and open domains to the source domain. In the first part of “Response to Common Questions”, we explain why our method can be generalized better to address the image segmentation on the unseen domains than the popular methods.
>### 3. WildNet presents a concept of transferring source features into various styles to achieve domain generalization. By incorporating additional well-designed techniques such as source feature style randomization, it can be considered a suitable method for open compound domain adaptation.
Thanks for pointing out this relevant work, which will be added to our paper for discussion. WildNet borrows the object features of the wild domain, which is usually provided by large-scale datasets (e.g., ImageNet), to enrich the object features of the source domain. WildNet extends the source domain, where the object features can cover the unseen domains. Though this extension is effective, it is infeasible to cover all unseen domains exhaustively.
In contrast to WildNet, we follow another research direction of addressing OCDA. We store the discrepancy features between the source and target domains for style compensation, which can be reasonably generalized to transform the object features of the open domain to the source domain. Note that these works along different directions can work together to achieve better performances. Intuitively, the object features extended by WildNet belong to the target domains newly created. We add these extended features to the object features in the target domains in the experimental datasets. Next, we trivially compute the discrepancy features based on the object features of the source and target domains, where the discrepancy features are used for compensating the features of the open domain. In the table below, we compare WildNet, style compensation, and their combination, where the last method achieves the best performances on different datasets.
| method | type | C-Driving(Open) | ACDC(Open) | Cityscapes(Open) | KITTI(Open) | WildDash(Open) |
| - | :-: | :-: | :-: | :-: | :-: | :-: |
| WildNet | DG | 42.3 | 40.8 | 40.7 | 41.5 | 34.2 |
| Ours | OCDA | 46.9 | 44.1 | 43.6 | 46.5 | 40.1 |
| Ours+WildNet | OCDA | 48.3 | 45.8 | 44.1 | 47.4 | 42.7 |
>### 4. This method seems to require instance masks, but how do we obtain instance masks from a dataset that only has semantic masks?
We clarify that instance masks are unnecessary in our method. Please see the second part of “Response to Common Questions”. We plan to revise the “instance-key feature” to the “representative-key feature” to fix the inappropriate name.
>### 5. The content in Figure 2 seems to be sufficiently understandable from Figure 1(c). It would be better to combine Figures 2 and 3 appropriately and allocate more space to the related work section.
Thanks. Figure 1(c) shows the importance of considering the style difference between different object categories and instances. It provides a high-level motivation for proposing the style compensation of different categories and instances, but it contains little technical information. Thus we need Figures 2 and 3 to provide the technical details. We agree that the redundant elements of Figures 2 and 3 can be trimmed to make the technical pipeline easier to understand. Please see the revised and merged figures in Figure 2 of the rebuttal file.
>### 6. There are three duplicated reference numbers of '23' on line 65."
Thanks. We have fixed the duplication.
>### 7. What is the reason behind only proposing the target-to-source approach?
With the target-to-source compensation, we transform the object features of the target domains to have a similar style to the source domain. Consequently, the segmentation network only needs to learn from the visual styles of the source domain, producing reliable pseudo labels for the images of the target domains with more variant styles. This advantage cannot be achieved by conducting source-to-target compensation.
---
Rebuttal Comment 1.1:
Comment: The authors' thorough responses are greatly appreciated. As most of my concerns are resolved, I would raise my score to "Borderline accept".
---
Reply to Comment 1.1.1:
Title: Thanks for Your Review
Comment: Dear Reviewer uGsB,
Thank you again for your valuable comments. We are pleased to see that the questions raised by you are solved and the score is promoted by you to "Borderline accept". **Please kindly allow us to remind that the score is changed via the syetem entrance of "Rating", which will affect the final decision.**
Best,
Authors of Paper ID 10143
---
Rebuttal 2:
Title: Sincerely Request Your New Comment
Comment: Dear Reviewer uGsB,
We thank you again for your valuable comments, which significantly help us polish our paper. We look forward to discussing with you any questions that have not yet been addressed satisfactorily.
Best,
Authors of Paper ID 10143 | Rebuttal 1:
Rebuttal: ## Response to Common Questions
>### **1. The advantage of style compensation based on the discrepancy features**
>- Reviewer LZYW-Q2 “Are there any advantages to storing these discrepancies rather than just performing style regression?”
>- Reviewer uGsB-Q2 “The proposed method is not specifically designed for open compound domain adaptation but rather for multi-target domain adaptation. How to generalize to unseen domains is not addressed.”
>- Reviewer okqW-Q8 “From a high-level, I doubt if this way can really help the segmentation on the open domain, as the compensation is designed for the target, i.e., based on the distribution or statistics of target images or features. How can such compensation generalize to an unseen domain?”
Below, we explain the advantage of our method of style compensation, which specifically transforms the object features of the target and open domains to the source domain. In Figure 1 of the rebuttal file, we conceptually compare the style compensation with the typical OCDA methods (e.g., ML-BPM [ECCV 2022], AST [AAAI 2022], CSFU [CVPR 2021], DHA [NeurIPS 2020], and CDAS [CVPR 2020]) that optimize the parameters of the deep network to transform the object features of the source domain to the target domain, or vice versa.
*[Feature Transformation by Network Parameters]*
In Figure 1(a), we illustrate the source and target object features as the dots and crosses of different categories spanning over the latent space. The dot and cross of the same category represent distinct instances. The source and target features belong to separate clusters (the yellow as the source domain, the blue and red as the target domains) that indicate the source and target domains with distinct styles. The typical OCDA methods learn deep network parameters. The parameters (the dash arrows) transform the target object features into the source domain.
The parameters are learned from the source and target images in the training set. But they cannot trivially transform the open domain's object features (see the dots and crosses in the green cluster) that contain many unseen styles. Using the learned parameters directly may transform the object features of the open domain to a latent sub-space misaligned with the source domain. A remedy is adaptively re-weighting and re-combining the learned parameters, which are generalized as the new parameters (denoted as the green arrow) to transform the open domain features. But the network contains many layers of parameters. How can the parameters correspond to the unknown number of target and open domains in different layers? How can we re-weight and re-combine these parameters without an explicit correspondence between the parameters and the target domains? These problems remain challenging. Moreover, we should not only transform the image-level features of the entire scenes but also the features of different object categories and instances, as we have discussed in the introduction of the main paper (see lines 30-36). This goal is hard to achieve without effectively generalizing the learned parameters.
*[Advantage of Style Compensation]*
Rather than using the network parameters to transform the target/open domain's features to the source domain, we learn the discrepancy features representing the style differences across domains. Figure 1(b) shows that we learn and store the discrepancy features. The discrepancy features (denoted as the solid blue and red arrows) explicitly correspond to different object categories and representative styles of instances. The stored discrepancy features are flexibly re-weighted and re-combined, based on the similarity between object features of target and open domains (see the weights $w_1$ and $w_2$ computed in Eq. (3)). In this way, we generalize the re-weighted discrepancy features. A new discrepancy feature (denoted as the green arrow) transforms the object features of the open domain to the source domain (see the formulation in Eq. (3) of the main paper). In Figure 1(c) of the rebuttal file, we visualize some of the object features of the open domain, which undergo style compensation or not. With the style compensation powered by the discrepancy features, we better transform the open domain’s object features into the source domain. In Table 7(c-f) of the main paper, we have evaluated our method on the target and open domains of different datasets, where our method outperforms other methods.
>### **2. Instance masks**
>- Reviewer LZYW-Q1 “It's unclear how different instances within the same class are differentiated.”
>- Reviewer uGsB-Q4 “The paper does not explain how to obtain instance masks.”
>- Reviewer okqW-Q3 “For the Instance-Key and Discrepancy Features, they are instance-level (see Eq. (2)). I just doubt if such method can be scalable to large-scale datasets, as the m may be super large (there are large numbers of pixels within one specific category).”
>- Reviewer CD2C-Q1 “You raise a valid point regarding the terminology Instance-Key which may not be appropriate in the context of semantic segmentation tasks where instance-level annotations are not available.”
We clarify that instance masks are unnecessary in our method. The term "instance" in "instance-key feature" does not strictly refer to an object instance. An instance-key feature is updated by the same category of object features, which are extracted from different images. The object features from different images naturally represent various object instances. Thus, the instance-key feature is named as such to indicate that it is sensitive to object instances. More specifically, the instance-key feature summarizes the same category's object instances with similar visual styles. Consequently, an instance-key feature captures one of the representative styles that widely appear among the object instances of the same category. We plan to revise "instance-key feature" to "representative-key feature" to fix the inappropriate name.
Pdf: /pdf/954cf44b9d2faa21dde985f6c976b8fdc9c5acd4.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper introduces a new method for OCDA called Object Style Compensation, which focuses on adapting the style changes of different categories or instances of objects rather than just the overall scene style. The authors construct an Object-Level Discrepancy Memory by storing multiple sets of discrepancy features. This stored information is used to select the appropriate discrepancy features for compensating the style information of object instances, and object features are then compensated using the selected discrepancy features. The proposed method achieves SOTA on OCDA benchmarks.
Strengths: - Using the style of objects is both ideal and intriguing.
- Experiments show the effectiveness of the proposed method.
Weaknesses: - How exactly are the objects obtained? It's understandable that categories can be distinguished by pseudo labels, but it's unclear how different instances within the same class are differentiated.
- Why is storing style differences important? Are there any advantages to storing these discrepancies rather than just performing style regression?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - The explanation on L110 indicates that 'l' represents the category, but there is no explanation about what 'm' signifies. Does 'm' mean the number of instances?
- The quality of the pseudo labels seems likely to influence Discrepancy Memorization. Is there a guarantee of a certain level of quality even in the initial stages of learning?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: There are no limitations to potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >### 1. It's unclear how different instances within the same class are differentiated.
Thank you for pointing out this confusion between the instance-key feature and the object instance. In this paper, we do not need to differentiate the instances as those in the instance segmentation task. Please see the second part of “Response to Common Questions”. We plan to revise the “instance-key feature” to the “representative-key feature” to fix the inappropriate name.
>### 2. Are there any advantages to storing these discrepancies rather than just performing style regression?
We respectfully point out that style regression (e.g., ML-BPM [ECCV 2022], AST [AAAI 2022], CSFU [CVPR 2021], DHA [NeurIPS 2020], and CDAS [CVPR 2020]) unsatisfactorily addresses Open Compound Domain Adaptation (OCDA), where the styles of different object categories and instances should be appropriately transformed. This motivates us to store the features of the style discrepancies, providing a more explicit and flexible way of changing the styles at the category and instance levels. Please see our discussion in the first part of "Response to Common Questions".
>### 3. There is no explanation about what “$m$” signifies.
We denote “$m=1,…,M$” as the index of the representative-key feature, where “$M$” is the total number of the representative styles of the instances in the same object category. “$M$” serves as a hyper-parameter. In Section 2.2 “Sensitivity Analysis of Memory Capacity” of the supplementary file, we have experimented with changing “M” and examining the effect on the segmentation performance. We will add the explanation to the main paper.
>### 4. Is there a guarantee of a certain level of quality even in the initial stages of learning?
To reduce the negative impact of low-quality pseudo labels on the Discrepancy Memorization, we pre-train the backbone segmentation network on the source images with the ground-truth segmentation masks (see Section 1 of the supplementary file). The pre-trained network yields more reliable pseudo labels, stabilizing the Discrepancy Memorization even in the initial stage.
>### 5. There are no limitations to potential negative societal impact.
Section 4 of the Supplementary Materials addresses the negative societal impacts. Our approach fosters comprehensive image analysis. However, the analysis may contain problematic information, possibly leading to the infringement of economic interests.
---
Rebuttal 2:
Title: Sincerely Request Your New Comment
Comment: Dear Reviewer LZYW,
We thank you again for your valuable comments, which have significantly helped us to polish our paper. We look forward to discussing with you any questions that remain unsatisfactorily addressed.
Best,
Authors of Paper ID 10143
---
Rebuttal Comment 2.1:
Title: Sincerely Request Your New Comment Again
Comment: Dear Reviewer LZYW,
Again, please allow us to extend our sincere thanks to you, for your time and efforts of reviewing our paper. As the deadline for the authors' response is approaching, we sincerely request your comment on our primary response. This will definitely give us a valuable chance to address the questions unsolved.
Best,
Authors of Paper ID 10143 | null | null | null | null | null | null |
PDF: Point Diffusion Implicit Function for Large-scale Scene Neural Representation | Accept (poster) | Summary: This paper proposed a point cloud-based representation for neural rendering of large-scale scenes. Point cloud diffusion model is designed to upsample the point cloud for better performance. The proposed method achieves state-of-the-art performance in the tested scenes. However, I still have some concerns about the implementation details and the performance comparison.
Strengths: 1. The proposed method achieves state-of-the-art performance in the neural rendering of large-scale scenes.
2. Using diffusion models for point cloud upsampling is interesting and useful in the tested scenes.
3. The presentation and writing of the paper is good and the visualization comparison is clear.
Weaknesses: 1. Size of the train dataset for diffusion. Diffusion model typically requires large-scale dataset for better performance. I'm curious about the number of training point cloud samples for diffusion model.
2. Reconstruction quality. The 3D reconstruction quality of MVS methods like COLMAP is not guaranteed, and there can be large holes in texture-less regions (e.g., white walls in outdoor scenarios). I wonder about the performance of the diffusion model in such situations (as the diffusion model in the manuscript is mainly designed for upsampling, not completion).
3. Comparison with Point-NeRF. Point-NeRF is an obvious baseline for this method, which should be included in the experiments. The ablations in the experiments can be seen as a variation of Point-NeRF. However, Point-NeRF includes a point growing and pruning strategy, which can also densify the point cloud. It would be better to test the performance of this strategy in large-scale scenes, which has not been explored before.
4. Inference time. The proposed method exploits an explicit reconstruction of the scene, so empty space can simply be skipped for better efficiency. However, the complexity of the proposed method is not reported, and some implementation details are missing (e.g., the number of sampled points).
5. Background features. Mip-NeRF 360 is utilized for background rendering. However, as shown in Fig. 5, the background also contributes to foreground regions. It would be better to include an explanation for this.
6. Others. Diffusion models are widely used as generative models, and there is inherent randomness in their generated results. However, 3D reconstruction/neural rendering should be deterministic. I wonder about the influence of this randomness on the method. Besides, I'm also curious about the failure cases of the method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to Weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to Weaknesses (especially 2. and 6.).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your approval of our idea and the detailed and insightful comments. Your concerns will be addressed in the following comments and the final version of our paper will be updated accordingly.
📝 **Q: Size of the train dataset for diffusion.**
💡 **A:** Our method requires a diffusion model to be trained on each scene, the number of training point cloud samples varies from 40,000 to 100,000 for different scenes.
📝 **Q: Reconstruction quality.**
💡 **A:** As you note, there are indeed missing parts in the point cloud reconstructed by COLMAP, as shown in Figure 9. During the diffusion training process, our training data pairs are obtained by subsampling the reconstructed point cloud twice, so the model learns to transform a sparser point cloud into a denser one. Since the subsampling rate can approach zero, the training process encompasses cases of completing structural gaps. Therefore, our method is capable of addressing such situations, as depicted in Figure 9.
📝 **Q: Comparison with Point-NeRF.**
💡 **A:** Thank you for your suggestion. Point-NeRF employs a point growing and pruning strategy to generate a surface point cloud sufficient for rendering in small-scale scenes. However, this approach falls short when dealing with sparse and structurally complex point clouds in large-scale scenes. We present the results of Point-NeRF on the OMMO dataset, as shown in the table below.
|Method | PSNR↑ |SSIM↑ |LPIPS↓|
|:-:|:-:|:-:|:-:|
|Point-NeRF| 17.85| 0.57 |0.320|
|**Ours**| **25.10** |**0.79**|**0.205**|
📝 **Q: Inference time and implementation details.**
💡 **A:** Our method avoids ray sampling in the empty scene space by leveraging point clouds, so the inference time will be relatively reduced. We evaluate the training and rendering time of our method on the 5th scene (Sydney Opera House) from the OMMO dataset. At a resolution setting of 1280×676, our method takes about 38 hours to train and 12 seconds to render on a single Nvidia A100 GPU device. Referring to the implementation details of Point-NeRF, we set the number of sampling points on the sampling ray of each pixel to 80, and the features of each sampling point are aggregated from the features of 8 neural points around it.
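As a rough illustration of the neighborhood aggregation mentioned above (each sampling point's feature aggregated from 8 surrounding neural points), a Point-NeRF-style inverse-distance weighting could look like the following. The function name and the exact weighting scheme are our assumptions, not the paper's precise formulation.

```python
import numpy as np

def aggregate_features(sample_pt, pts, feats, k=8):
    """Hypothetical sketch: aggregate the k nearest neural-point features
    around a ray sampling point with inverse-distance weights.

    sample_pt: (3,)  position of a sampling point on a ray
    pts:       (N, 3) neural point positions
    feats:     (N, D) features attached to the neural points
    """
    d = np.linalg.norm(pts - sample_pt, axis=1)
    idx = np.argsort(d)[:k]          # k nearest neural points
    w = 1.0 / (d[idx] + 1e-8)        # closer points contribute more
    w /= w.sum()
    return w @ feats[idx]            # (D,) aggregated feature
```

With 80 sampling points per pixel ray, this aggregation runs once per sampling point before the radiance MLP.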
📝 **Q: Reasons for the facilitative impact of the background on the foreground.**
💡 **A:** During the training process, the foreground and background are jointly optimized. If the background exhibits poor performance, it can lead to significant losses and adversely affect the optimization of the foreground.
📝 **Q: the influence of the randomness.**
💡 **A:** To explore the influence of this randomness on our method, we conducted experiments on three scenes (scan5, scan11, scan12) from the OMMO dataset. We set three different random seeds for each scene, and the quantitative results are shown in the table below. The metrics show that the impact of different random seeds on rendering quality is almost negligible. In addition, our method has no failure cases on the OMMO dataset; Figure 10 in the attached PDF shows the rendering results of all scenes.
|Seed | PSNR↑ |SSIM↑ |LPIPS↓|
|:-:|:-:|:-:|:-:|
|2| 28.60 | **0.90** |**0.136**|
|4| **28.63** | **0.90** |**0.136**|
|8| 28.42 | 0.89 |0.140 |
|**Ours**| 28.60 |**0.90** |0.137|
---
Rebuttal Comment 1.1:
Title: Response
Comment: I appreciate the detailed rebuttal.
As for the performance of Point-NeRF, have you also used background fusion for Point-NeRF (maybe a simple fusion with MipNeRF 360)? I observed that the performance of Point-NeRF is even worse than the version of yours w/o diffusion (according to the rebuttal for Reviewer yNkC).
---
Reply to Comment 1.1.1:
Title: Response to the Performance of Point-NeRF
Comment: Thank you for your insightful comments.
We sincerely apologize for the confusion. We present our evaluation results of Point-NeRF on the entire OMMO dataset and provide ablation experiments for Reviewer yNkC (due to time and computational constraints, all ablation experiments are evaluated on a single scene, the Sydney Opera House).
Since Point-NeRF can only reconstruct sparse foreground point clouds, it cannot be directly applied to large-scale outdoor scenes. The evaluation results on the entire OMMO dataset are as follows:
|Method | PSNR↑ |SSIM↑ |LPIPS↓|
|:-:|:-:|:-:|:-:|
|Point-NeRF| 17.85| 0.57 |0.320|
|**Ours**| **25.10** |**0.79**|**0.205**|
Furthermore, we provide a set of ablation studies to analyze the performance gains brought by our point super-resolution diffusion module and the background module. Among them, "w/o diffusion, w/o background" is the Point-NeRF method, and "w/o diffusion, w/ background" is Point-NeRF with background fusion. It can be observed that, compared to Point-NeRF, our point super-resolution diffusion module can bring significant gains by providing a dense surface point cloud (Point-NeRF vs. Our foreground).
|Method | Known as |PSNR↑ |SSIM↑ |LPIPS↓|
|:-:|:-:|:-:|:-:|:-:|
|w/o diffusion, w/o background| Point-NeRF |9.28| 0.51 |0.355|
|w/o diffusion, w/ background| Point-NeRF + background |21.05| 0.83 |0.219|
|w/ diffusion, w/o background| Our foreground |22.93| 0.78 |0.235|
|**w/ diffusion, w/ background**|**Ours**|**27.58** |**0.90**|**0.162**|
In the final manuscript, two tables will be included separately as Table 1 in the Performance Comparison section and as Table 3 in the Ablation Studies section, accompanied by detailed explanations to avoid confusion. | Summary: The paper presents a method for reconstructing large-scale scenes from multi-view images. Given the sparse point cloud computed from an MVS pipeline, the algorithm first employs a point cloud diffusion model to upsample the point cloud. The utilization of the diffusion model could effectively create a dense cloud with missing regions completed. Then, a Point-NeRF-style model is used to reconstruct the scene based on the upsampled point. Foreground and background renderings are fused via an additional fusion network, resulting in the final output. The algorithm is tested on OMMO and BlendMVS datasets. Compared to existing NeRF baselines, the proposed pipeline could limit the sampling region to the surface and achieves better reconstruction and rendering quality.
Strengths: 1. Using the diffusion model for point cloud upsampling and completion is novel. While previous methods perform point cloud upsampling or completion using VAE or GAN models, this is the first paper that leverages the power of diffusion models. One related work could be Luo et al. [a], yet the proposed algorithm treats the partial input cloud as a condition and hence is also able to generate an arbitrary number of points.
2. The results are compelling. Using the upsampled point cloud as the supporting domain of NeRF, the method could reconstruct a much larger scene with finer details. Both the qualitative and quantitative experiments demonstrate the effectiveness of the proposed approach.
[a] Luo, Shitong, and Wei Hu. "Diffusion probabilistic models for 3d point cloud generation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
Weaknesses: 1. While the use of the diffusion model for point cloud upsampling is novel, its combination with the subsequent NeRF model is very straightforward and detached. The NeRF model is only a slight modification from Point-NeRF, with the addition of the background MLP, and such a representation itself is not insightful enough to form an individual contribution.
2. Some descriptions regarding the method section and experiment section are not very clear. What is the training scheme of the diffusion model? (more details in the 'Question' section). What does '... as a feature $f_i$ of each sampling point $p_i$ to equip structural information' in Line 172 mean?
3. Ablation study using other point cloud upsampling methods is missing. Given the main contribution of the paper is the diffusion-based upsampling network, it is worth investigating if a diffusion-based backbone could surpass a traditional GAN-based backbone. The study could be conducted on a simple toy dataset.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The main question arises from the training strategy as well as the model specifications of the diffusion model:
1. What is the model architecture of the denoiser? How many parameters are needed?
2. Which dataset is used to train the diffusion model? How many training samples are there? Is data augmentation applied?
3. How many points are generated from the diffusion model?
4. What is the mechanism that allows the diffusion model to complete missing regions (as shown in Fig. 5)? Are regions randomly masked out during the training stage to encourage the model to hallucinate missing contents?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your approval of our idea and the detailed and insightful comments. Your concerns will be addressed in the following comments and the final version of our paper will be updated accordingly.
📝 **Q: The combination of novel diffusion-based point cloud upsampling models and Point-NeRF-based NeRF models is relatively weak.**
💡 **A:** For unbounded, large-scale outdoor scenes, Point-NeRF-based NeRF models are unable to accurately represent and synthesize fine-grained textures due to the vast sampling space and excessively sparse point clouds, as seen in the first column of Figure 5. To overcome this critical issue, we propose a novel diffusion-based point cloud upsampling module that generates dense scene surfaces. By incorporating an explicit surface prior, we reduce the sampling space of the Point-NeRF-based NeRF model from an unbounded 3D urban-level scale to the scene surface. This effective combination of explicit and implicit representations is the key to the success of our method and provides an effective solution for large outdoor scenes. Furthermore, the effectiveness of large-scale point cloud upsampling is difficult to measure through visualization due to the high degree of object occlusion and overlap. Neural rendering provides a solution for evaluating this research.
📝 **Q: GAN-based point cloud upsampling approaches.**
💡 **A:** We appreciate your insightful suggestion. In conjunction with the recommendation from Reviewer H9co, we have replaced our diffusion-based method with a Generative Adversarial Network (GAN)-based point cloud upsampling approach [1].
📝 **Q: What is the training scheme of the diffusion model?**
💡 **A:** The structure of our diffusion model follows PVD [2], with about 27.6M parameters. We use the sparse point cloud reconstructed by COLMAP as input to train the diffusion model. Our method requires a diffusion model to be trained on each scene; the number of training point cloud samples varies from 40,000 to 100,000 for different scenes. We did not use data augmentation. The diffusion model generates 1,848 points per round based on 200 prior points, and a total of 200 rounds are performed, i.e., 369,600 points are generated. For missing-region completion specifically, the model takes as input a 200-point partial point cloud and 1,848 points sampled from noise, totaling 2,048 points. At each step, the first 200 of the 2,048 points sampled by the model are replaced with the input partial points. The updated point set is then used as input at the next time step. We did not use a random masking strategy.
[1] Junzhe Zhang, Xinyi Chen, Zhongang Cai, Liang Pan, Haiyu Zhao, Shuai Yi, Chai Kiat Yeo, Bo Dai, and Chen Change Loy. Unsupervised 3d shape completion through gan inversion. In CVPR, 2021.
[2] Linqi Zhou, Yilun Du, and Jiajun Wu. 3d shape generation and completion through point-voxel diffusion. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 5826–5835, October 2021.
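The replacement-based conditioning described above (overwriting the first 200 points with the known partial input at every reverse step) can be sketched as follows. This is only an assumed outline: `denoise_step` stands in for one reverse step of the trained PVD-style model, and the sampling schedule is simplified.

```python
import numpy as np

def conditional_sample(denoise_step, partial, n_gen=1848, dim=3, T=100, rng=None):
    """Hypothetical sketch of completion by replacement: at every reverse
    diffusion step, the first len(partial) points are overwritten with the
    known partial input, conditioning generation on it."""
    rng = rng or np.random.default_rng(0)
    # concatenate the known partial points with points sampled from noise
    x = np.concatenate([partial, rng.standard_normal((n_gen, dim))])
    for t in range(T, 0, -1):
        x = denoise_step(x, t)          # one reverse diffusion step
        x[: len(partial)] = partial     # re-impose the known points
    return x                            # (len(partial) + n_gen, dim)
```

Repeating this over 200 rounds with 200 prior points each, as the rebuttal describes, yields the full dense cloud.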
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Sorry for the late reply as I've been on a very busy schedule recently. You just said that your diffusion model is trained for EACH scene (independently?) which added to my confusion. If I understand correctly, essentially you are training the diffusion in a self-supervised manner, so even if you don't have ground truths for a denser point cloud, you could still train from the sparse COLMAP points.
This, however, is different from the GAN inversion method [1] where a prior is already learned through a training set. Hence I am wondering if the comparison is still fair in this case.
Best,
---
Reply to Comment 1.1.1:
Comment: Thank you for taking time out of your busy schedule to make insightful comments.
For each scene, we train one diffusion model. We downsample the sparse point cloud x_s reconstructed by COLMAP to obtain an even sparser point cloud z_0, and further downsample z_0 to obtain the sparsest point cloud x_0, so that x_s, z_0, and x_0 are progressively sparser. Our training process recovers z_0 from the sparsest x_0. During testing, we take x_s as input to generate a denser super-resolved point cloud. This learns a prior for each scene. When using the GAN inversion method for point cloud upsampling, we employ the same data organization and augmentation, so our comparison with the GAN inversion method is relatively fair.
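The nested subsampling used to build these self-supervised training pairs could be sketched as below. The function name and the subsampling rate are our assumptions; the rebuttal only states that x_s, z_0, and x_0 are progressively sparser and that the model learns the x_0 → z_0 mapping.

```python
import numpy as np

def make_training_pair(x_s, rate=0.25, rng=None):
    """Hypothetical sketch: build one (input, target) pair for per-scene
    diffusion training by subsampling the COLMAP cloud x_s twice.

    Returns (x_0, z_0): the sparsest input and its denser target.
    """
    rng = rng or np.random.default_rng(0)
    idx_z = rng.choice(len(x_s), int(len(x_s) * rate), replace=False)
    z_0 = x_s[idx_z]                         # target: sparser than x_s
    idx_x = rng.choice(len(z_0), int(len(z_0) * rate), replace=False)
    x_0 = z_0[idx_x]                         # input: sparsest
    return x_0, z_0
```

At test time the model then upsamples x_s itself, one level denser than anything seen during training.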
At the same time, we would like to explain the necessity of independent training. The distribution of point clouds varies greatly from scene to scene, especially for large-scale scenes. For this a single pre-trained model has limited generalization capability, while independent training can better capture unique geometries of the scene. Moreover, our diffusion model is still somewhat robust. The network architectures and hyperparameters are shared without re-designing. Only the weights differ. | Summary: This paper proposes an implicit neural representation for large-scale scenes with two major components, the first one is the point diffusion implicit function (PDF) which adopts diffusion process to generate dense point cloud from point cloud produced by Colmap, the second one is the background rendering which basically borrow the idea from Mip-Nerf360.
Strengths: • The paper is well organized and easy to follow.
• The qualitative comparison on the fly-view dataset clearly shows the effectiveness of the proposed method on the large-scale scenes, where the rendered images are sharp in both foreground and background.
• Most of nerf paper focus on small or medium scene, this paper instead provides a interesting solution to the large scale scene representation.
Weaknesses: • The effectiveness of the diffusion process, the motivation behind adding the diffusion process over colmap generated point cloud is not clear to me since the PointNerf method can generate clean and sharp foreground rendering directly with the colmap point cloud. Some explanation or visualization showing the point cloud before and after diffusion will be very helpful.
• One major advantages of point-cloud based neural field is they can avoid prohibitive reconstruction time of Nerf, so a question what is the training time and rendering speed of this paper. Will the diffusion process significantly slow down the training speed?
• The overall technical contribution of this paper is weak. The main component PDF shares its major design with Point-NeRF except for applying a diffusion process over the COLMAP-generated point cloud. Currently there is not enough discussion of the insight behind this design, as I said before. The background rendering module also follows the existing technique from Mip-NeRF 360. Therefore I think the contribution is not enough.
• In the experiment part the authors said they use two datasets for evaluation, but I didn't find the result on BlendMVS.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: • How is the diffusion process necessary for large-scale scene representation? I do see an ablation study about that module, but I am not sure why removing the diffusion makes the method fail entirely while Point-NeRF can work well with point clouds directly from COLMAP. More explanation or visualization would be very helpful.
• What's the runtime of the proposed method?
• A general question (maybe not that related to the proposed method): Are there any other solutions for background rendering? Would an environment map work for the background of a large outdoor scene?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: The authors have discussed potential negative social impacts of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your approval of our idea and the detailed and insightful comments. Your concerns will be addressed in the following comments and the final version of our paper will be updated accordingly.
📝 **Q: The combination of novel diffusion-based point cloud upsampling models and Point-NeRF-based NeRF models is relatively weak.**
💡 **A:** For unbounded, large-scale outdoor scenes, Point-NeRF-based NeRF models are unable to accurately represent and synthesize fine-grained textures due to the vast sampling space and excessively sparse point clouds, as seen in the first column of Figure 5. To overcome this critical issue, we propose a novel diffusion-based point cloud upsampling module that generates dense scene surfaces. By incorporating an explicit surface prior, we reduce the sampling space of the Point-NeRF-based NeRF model from an unbounded 3D urban-level scale to the scene surface. This effective combination of explicit and implicit representations is the key to the success of our method and provides an effective solution for large outdoor scenes. Furthermore, the effectiveness of large-scale point cloud upsampling is difficult to measure through visualization due to the high degree of object occlusion and overlap. Neural rendering provides a solution for evaluating this research.
📝 **Q: The results on the BlendMVS dataset.**
💡 **A:** We sincerely apologize for the confusion. Due to limitations in space and time, we have included the experimental results of the BlendMVS dataset in the supplementary materials. For further details, please refer to Table 4 and Figure 6 in the supplementary materials, which will be incorporated into the final version of the manuscript.
📝 **Q: The necessity of the diffusion module.**
💡 **A:** For outdoor urban-level scenes, the point cloud reconstructed using COLMAP tends to be sparse, especially in unbounded background regions, resulting in a blurry foreground and a missing background in rendered images. To demonstrate this, we provide quantitative results of applying Point-NeRF to the unbounded large-scale OMMO dataset in the following table. It can be observed that Point-NeRF is not directly applicable to large scenes. Additionally, we present visualization results in Figure 9, where severe occlusions in outdoor large-scale scenes cause the loss of most geometric details when projecting the point cloud onto 2D. To facilitate performance observation, we conducted experiments on the hotdog scene in the NeRF-Synthetic dataset. It can be observed that our method not only enhances the geometric structure of the foreground but also fills in the missing parts.
|Method | PSNR↑ |SSIM↑ |LPIPS↓|
|:-:|:-:|:-:|:-:|
|Point-NeRF| 17.85| 0.57 |0.320|
|**Ours**| **25.10** |**0.79**|**0.205**|
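For reference, the PSNR values reported above follow the standard peak signal-to-noise ratio definition; a minimal sketch of how it is computed (assuming images normalized to [0, 1]; this is an illustrative helper, not our actual evaluation code):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better; a uniform per-pixel error of 0.1 on [0, 1] images corresponds to 20 dB.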
📝 **Q: What is the runtime of the proposed method?**
💡 **A:** We evaluate the training and rendering time of our method on the 5th scene (Sydney Opera House) from the OMMO dataset. At a resolution setting of 1280×676, our method takes about 38 hours to train and 12 seconds to render on a single Nvidia A100 GPU device.
📝 **Q: Additional solutions for background rendering.**
💡 **A:** Foreground-background modeling, originating from NeRF++, is a commonly used method for neural implicit scene representation. This approach is followed by subsequent works such as NeuS, VolSDF, and CoCo-INR, among others. Additionally, environment texturing is a prevalent technique, though it is usually applied to model individual objects. For scenes, however, the background is an integral component, making foreground-background modeling the more prevalent choice. | Summary: In this paper, the authors propose a new approach to the reconstruction of Neural Radiance Fields from large, unbounded scenes. A major limitation of current neural fields is their lack of scalability due to the number of sampling points in empty space, which limits the achievable scale or quality with limited compute resources. In this method, the sampling space is restricted to the surface of the scene. To get an approximation of the scene surface, they utilize a point cloud, which is first reconstructed through MVS and then upsampled to a dense surface point cloud using a diffusion probabilistic model. Sampling points are placed only in the near vicinity of the surface point cloud, and a feature is predicted for each from the neighborhood points and sampling points, forming the foreground feature. The unbounded background of the scene is modeled as features on a sphere. Foreground and background features are fused in another MLP.
Strengths: - The presented method builds on known techniques for neural rendering and generative super-resolution and combines those to solve a novel task of reconstructing large-scale unbounded scenes. I think this is a big strength of the paper, which is justified by qualitative and quantitative results and ablation studies.
- The training approach shows a clear explanation of how the missing ground truth data for the point cloud can be optimized with the limited resources
- The authors present their method in an understandable way, adding the necessary context and references to reproduce the work
- Presented results show a clear advantage over standard NeRF techniques, and better results in areas of fine textures compared to methods designed for large-scale or unbounded scenes
- Most design choices are ablated and justified in separate studies
Weaknesses: - One major weakness of the proposed method is that the whole model must be retrained per scene. This is also mentioned by the authors in their limitations, but it leads to very long training times compared to state-of-the-art methods.
- While the result section presents all relevant quantitative results in a table, the authors do not provide an explanation for some of the outliers and rather good performances of the baselines in some scenes.
- I hope I have not missed something, but there is no explanation of Fig. 4. To my understanding, it shows a scene in which Mip-NeRF 360 fails completely to reconstruct the scene, but no explanation is given. I am also curious why it results in something like this rather than only blurry results as reported in Fig. 3
- One major limitation, and an experiment important for the specific task of large-scale outdoor scene reconstruction, is to show the boundaries of the method in terms of scene scale. Mega-NeRF is specifically designed to divide a scene into multiple sections and might be able to handle larger scenes, which is not the case for this method. An additional study on the limitation of this method in terms of scene size, alongside training and rendering times, would strengthen the claims
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can you provide the training and rendering times at a given resolution for your method and all reference methods?
- The authors mentioned that the method was only evaluated on a subset provided by the authors of OMMO, but the full dataset results in the supplementary provide similar insights.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Most limitations are mentioned at the end of the paper. As discussed in the weaknesses section, I think an evaluation of limitations w.r.t. scene size and computational times in a similar setting would provide a better understanding of the capabilities of this method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your approval of our idea and the detailed and insightful comments. Your concerns will be addressed in the following comments and the final version of our paper will be updated accordingly.
📝 **Q: long training times caused by per-scene optimization.**
💡 **A:** Due to the challenge of establishing effective and generalizable representations for large-scale scenes containing a multitude of object categories, the common practice is to optimize on a per-scene basis. In comparison to existing state-of-the-art (SOTA) methods, our rendering time is acceptable, thanks to the utilization of a dense point cloud prior that reduces rendering time. A comparison of training times with SOTA methods is presented in the table below and will be included in our final version. Additionally, in future work, we aim to explore the utilization of a cross-scene point cloud upsampling generalization using a diffusion model, instead of training a diffusion model for each scene, to enhance efficiency.
|Method | NeRF |NeRF++ |Mip-NeRF|Mip-NeRF 360|Point-NeRF|Ours|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Training time (h)| 7.2 | 9.5 |11.2|9.1|8.0|38.0|
|Rendering time (s)| 3.4 | 5.2 |8.7|10.3|6.3|12.0|
📝 **Q: Explanation for some of the outliers and performances.**
💡 **A:** Thank you for your suggestion. Our proposed method has no failure cases, but it may exhibit suboptimal performance in scenes with relatively flat spatial structures, such as grasslands (scan6, scan17), roads (scan1), and plazas (scan9), because the reconstructed point cloud is approximately planar and lacks the geometric priors present in other scenes. Similarly, NeRF tends to fail in unbounded scenes (scan1, scan27, scan33) and scenes with abundant reflective surfaces (scan26), as does Mip-NeRF (scan22, scan32, scan33, scan26). NeRF++, Mega-NeRF, and Ref-NeRF are relatively robust, but they may produce relatively blurry renderings for large-scale urban scenes. Mip-NeRF 360 demands highly robust data: even fewer than 10% abnormal camera poses can lead to failure on the entire scene (scan16).
📝 **Q: An analysis of the causes of the bad case in Mip-NeRF 360.**
💡 **A:** As illustrated in Figure 4, Mip-NeRF 360 encountered failure in an outdoor jungle scene characterized by a substantial amount of repetitive textures and similar details. We visualized the camera trajectory provided by the scene and cross-referenced it with the original video, revealing an erroneous calibration of a specific viewpoint (as depicted in Figure 12 in the attached PDF), which likely had a significant impact on the performance of Mip-NeRF 360. In contrast, our approach exhibits enhanced robustness by reconstructing and enhancing the scene point cloud, providing a visual representation that incorporates prior knowledge to counteract partially erroneous data.
📝 **Q: The method's upper limit on scene scale.**
💡 **A:** Thank you for your suggestions. According to the OMMO dataset, the scene areas range from 2 km² to 1000 km². Mega-NeRF divides a large scene into multiple smaller scenes to accelerate training and reduce the fitting difficulty of each NeRF unit. However, for city-scale scenes, each subpart remains too large to generate highly detailed textures. We did not adopt this approach, considering that partitioning would disrupt the integrity of the geometric structure, which is unfavorable for the point cloud super-resolution module to learn the underlying structure of the entire scene.
📝 **Q: dataset issues.**
💡 **A:** We sincerely apologize for the confusion. Due to the initial release of only a representative subset of the OMMO dataset, we obtained the complete data shortly before the submission deadline. As a result, the additional scene performance was included in the supplementary materials. In the final version, these two parts will be merged to report the overall results on the entire OMMO dataset.
---
Rebuttal Comment 1.1:
Title: Comment
Comment: Thank you for the extensive answers to my questions. Those provide the necessary context to understand the results and computational costs of the method and I encourage the authors to add those to the main paper
Wrt. the bad case in Mip-NeRF 360, I do not think it is a fair comparison for this specific case, although Figure 4 presents it as robustness across scenes. With the lack of context provided in the attached PDF, this does not paint a representative picture of the performance of the method. It would be good to either change the wording or add some context to the figure description.
Thank you for addressing the confusion concerning the OMMO dataset. That makes sense!
---
Reply to Comment 1.1.1:
Comment: We appreciate your constructive feedback and are pleased to address the majority of your concerns.
When considering the performance comparison with Mip-NeRF 360, since they are all based on the same data, all evaluation methods exhibit a degree of relative comparability, even if individual scenes have incorrectly calibrated perspectives. Concurrently, large-scale NeRF often necessitates enhanced robustness. Given that real outdoor large-scale scenes inherently encompass numerous disturbances that are unfavorable to NeRF, such as moving individuals or vehicles, alterations in illumination, high dynamic range, etc., inaccurately calibrated viewpoints can, in a sense, be regarded as a form of disturbance.
Furthermore, the table below presents experimental results excluding the failed scenes for Mip-NeRF 360, where our method still outperforms Mip-NeRF 360.
|Method | PSNR↑ |SSIM↑ |LPIPS↓|
|:-:|:-:|:-:|:-:|
|Mip-NeRF 360| 23.85 | 0.70 |0.395|
|**Ours**| **25.30** |**0.80**|**0.199**|
We are appreciative of your constructive feedback, which significantly contributes to the enhancement of our manuscript's quality. Additional evaluations/ablations and corresponding elucidations will be incorporated into the final version. | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive feedback and recognizing our method as **achieving better qualitative and quantitative performance** (yNkC, DZNn, rzgR, H9co, v7wN); **introducing a novel and effective diffusion-based up-sampling module for large-scale point clouds** (yNkC, DZNn, H9co, v7wN); **capable of generating detailed texture areas** (DZNn, rzgR, H9co, v7wN); and **being well-organized and easy to understand** (DZNn, rzgR, v7wN). We will address their concerns below.
Pdf: /pdf/85bde3350826796fc39d4574a1f6e9d27d5a3304.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a novel method for novel view synthesis of large-scale outdoor scenes. The method combines a Point-NeRF-based rendering stage for the foreground with a background stage based on Mip-NeRF 360. To cope with the issue of an overly sparse point cloud from COLMAP MVS, a point cloud diffusion model is trained on each individual scene from the existing MVS point cloud and used to synthesize more points. The proposed method achieves leading performance on both large and small scenes.
Strengths: * Training a scene-specific diffusion model for point cloud upsampling is a novel approach that seems to work well.
* The proposed method is able to achieve better quality on a large set of real-world scenes both quantitatively and qualitatively.
Weaknesses: * The proposed method requires a diffusion model to be trained on each scene, which can require significant computation.
* In terms of comparison with previous works, it is not clear whether the comparison is on level ground, as the proposed method might require more computation, a larger parameter count, or both.
* It is not clear whether the proposed method can outperform methods based on grid data structures, such as InstantNGP and Plenoxels, which tend to be very fast and scalable to large, sparse, and unbounded scenes.
* It is not clear whether other ways of upsampling the point cloud would also work. Although the use of a scene-specific diffusion model to upsample the point cloud is novel, it is not compared against any baselines. It might provide better insight, and be more helpful for future work, if this contribution were studied independently as a separate work.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: * Could you elaborate more on the reason behind the failure of Mip-NeRF 360 in Figure 4?
* Why is the background partially missing in Fig. 5 column 1?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Although there is a Limitations section, it only includes future work and lacks a discussion of the actual limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your approval of our idea and the detailed and insightful comments. Your concerns will be addressed in the following comments and the final version of our paper will be updated accordingly.
📝 **Q: The significant computation caused by per-scene optimization.**
💡 **A:** As you note, per-scene optimization is currently a prevalent approach for constructing implicit neural representations. Although there have been some advancements in NeRF generalization methods, they are still limited to representing toy or mini-abstract scenes. This limitation arises from the inherent difficulty of establishing a universal and effective representation for complex scenes or objects, particularly for unbounded outdoor scenes that encompass thousands of objects across numerous categories within each scene. Consequently, this paper adopts the per-scene optimization strategy to construct dense point clouds and neural implicit fields for each scene, thereby facilitating the simultaneous learning of representations for multiple object categories within the scene.
📝 **Q: Performance improvement due to large model size or design.**
💡 **A:** To validate whether the improvement originates from our proposed point cloud upsampling module or simply from additional parameters, we expanded the MLP layers of Point-NeRF and Mip-NeRF 360 to roughly match the parameter count of our approach. The following table shows the experimental results. Directly increasing the parameter count does not lead to additional performance improvements, as it can make the network more challenging to optimize and prone to overfitting the training viewpoints (due to time constraints, this experiment is validated only on scan5).
|Method |Params | PSNR↑ |SSIM↑ |LPIPS↓|
|:-:|:-:|:-:|:-:|:-:|
|Enlarged Point-NeRF| 38.19M |19.92| 0.77 |0.362|
|Enlarged Mip-NeRF 360| 36.10M |22.01| 0.77 |0.322|
|**Ours**| 38.68M| **27.58** |**0.90** |**0.162**|
📝 **Q: Additional point cloud upsampling approaches.**
💡 **A:** We appreciate your insightful suggestion. In conjunction with the recommendation from Reviewer H9co, we replaced our diffusion-based method with a Generative Adversarial Network (GAN)-based point cloud upsampling approach [1]. As evident from the table below, our method exhibits superior performance compared to the GAN-based method, primarily due to its ability to preserve the structural and topological characteristics of point clouds while effectively handling incomplete or noisy point cloud data (due to time constraints, this experiment is validated only on scan5, scan11, and scan12).
|Method | PSNR↑ |SSIM↑ |LPIPS↓|
|:-:|:-:|:-:|:-:|
|GAN-based method| 24.83| 0.86 |0.161|
|**Ours**| **28.60** |**0.90** |**0.137**|
📝 **Q: An analysis of the causes of the bad case in Mip-NeRF 360.**
💡 **A:** As illustrated in Figure 4, Mip-NeRF 360 encountered failure in an outdoor jungle scene characterized by a substantial amount of repetitive textures and similar details. We visualized the camera trajectory provided by the scene and cross-referenced it with the original video, revealing an erroneous calibration of a specific viewpoint (as depicted in Figure 12 in the attached PDF), which likely had a significant impact on the performance of Mip-NeRF 360. In contrast, our approach exhibits enhanced robustness by reconstructing and enhancing the scene point cloud, providing a visual representation that incorporates prior knowledge to counteract partially erroneous data.
📝 **Q: The reasons for the background missing when removing the diffusion-based point cloud up-sampling module.**
💡 **A:** We apologize for any confusion. Our intention with the ablation study was to create an additive chain, starting from the original multi-view reconstruction point cloud and progressively incorporating our diffusion super-resolution module and background module. Therefore, 'w/o diffusion' actually implies 'w/o diffusion and background'. In our work with city-scale outdoor scenes, the point clouds obtained from multi-view reconstruction methods are extremely sparse, particularly in unbounded background areas, leading to the absence of background in the rendered images. We will rectify this in the final version and update the results of the 'w/o diffusion' experiment (due to time constraints, this experiment is validated only on scan5).
|Method | PSNR↑ |SSIM↑ |LPIPS↓|
|:-:|:-:|:-:|:-:|
|w/o diffusion, w/ background| 21.05| 0.83 |0.219|
|**Ours**| **27.58** |**0.90**|**0.162**|
📝 **Q: Comparison with methods based on grid data.**
💡 **A:** In contrast to NeRF, InstantNGP uses a sparse parameterized voxel grid instead of an MLP for scene representation, compressing training time from hours to minutes or even seconds. To explore its performance on unbounded large-scale scenes, we trained InstantNGP on three scenes from the OMMO dataset. At a resolution of 640×320, after 10 seconds of training, the PSNR of the rendered images averages about 25.69. Qualitative results (as depicted in Figure 11 in the attached PDF) show that although InstantNGP can perform 3D scene modeling very quickly, it struggles to render the details of foreground objects, and the background is often completely blurred.
[1] Junzhe Zhang, Xinyi Chen, Zhongang Cai, Liang Pan, Haiyu Zhao, Shuai Yi, Chai Kiat Yeo, Bo Dai, and Chen Change Loy. Unsupervised 3d shape completion through gan inversion. In CVPR, 2021.
---
Rebuttal Comment 1.1:
Title: Further questions
Comment: I would like to thank the authors for the rebuttal. Some further questions:
1. For the InstantNGP experiment, will the result improve with longer training? 10 seconds of training seems to be too short.
2. I wonder if you have used the NeRF++/MipNeRF360 scene contraction in the Instant NGP experiment?
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful comments.
Regarding the first issue, on the OMMO dataset, InstantNGP demonstrates the ability to converge within a short training time and achieve comparable reconstruction quality. Further increasing the training time does not result in significant improvements in quality, aligning with the original design intention of InstantNGP as a fast-converging framework.
Regarding the second issue, we do not use scene contraction in the InstantNGP. This is because scene contraction requires applying spatial transformations to compress the scene to the unit sphere, which requires modifying the ray-marching logic to match the curvature of the rays to the transformations, increasing the complexity of the algorithm. The spatial transformation also affects the resolution distribution of the encoding; the encoding in InstantNGP depends on linear resolution, and scene contraction breaks the resolution linear relationship, which reduces the reconstruction quality. | null | null | null | null | null | null |
L2T-DLN: Learning to Teach with Dynamic Loss Network | Accept (poster) | Summary: The paper introduces the concept of teaching in machine learning and proposes a framework called L2T-DLN (Learning to Teach with Dynamic Loss Network). The framework aims to address the limitations of existing approaches by incorporating the temporal nature of loss function adjustment and utilizing the states of the loss function. The authors formulate the loss adjustment as a temporal task using a teacher model with memory units and propose a dynamic loss network to enhance the interactions between the teacher and student model. Extensive experiments demonstrate the effectiveness of the approach on various real-world tasks.
Strengths: 1. The paper presents a novel framework, L2T-DLN, which integrates the concept of teaching with dynamic loss functions. This approach stands out by considering the temporal nature of loss function adjustment and utilizing the states of the loss function, setting it apart from existing works.
2. The authors provide a thorough convergence analysis of the proposed framework, demonstrating its effectiveness and its ability to achieve convergence.
Weaknesses: 1. Unclear Motivation: The paper lacks a thorough discussion of the benefits of introducing the gradients concerning loss functions. It would be beneficial to include a clearer motivation.
2. Marginal Improvement: The reported results are 0.4% lower than ALA on ImageNet classification, and the improvement over the baseline is only 0.3% mIoU on the VOC dataset for segmentation; these margins are relatively small.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Further clarification is needed in the paper on why it is necessary to consider the gradients concerning loss functions as a core idea. Besides experimental results, could there be theoretical analysis supporting this aspect?
2. The authors introduce a Dynamic Loss Network instead of using a dynamic loss function. It would be helpful to explain if this choice leads to a larger number of parameters and potential unfairness in comparisons. Please clarify the impact of parameter quantity in this context.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The implementation of this approach necessitates substantial computational resources for calculating high-order derivatives.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
Weakness:
1. Previous studies involve directly supplying certain states of the student model to a teacher for dynamic loss adjustment (kindly refer to Lines 28-30, main paper). This direct provision of states without integration hinders L2T convergence, as the teacher model necessitates further learning to integrate these state-based insights effectively. In contrast, the gradient concerning DLN achieves holistic information integration throughout the learning process (kindly refer to Lines 130-136, main paper), facilitated by prior knowledge (chain rule). Employing the gradient concerning the loss allows the teacher model to concentrate on capturing and preserving crucial information from gradients, negating the need for supplementary handling of dispersed states.
2. The performance of ALA (reference [15], main paper) benefits from two aspects:
(1) The length of a student learning stage in ALA (200) is much larger than ours (25). Note that our ablation showed the stage length is positively correlated with both test accuracy and computational consumption (Lines 276-283, main paper);
(2) ALA employed a multi-student training strategy to achieve stronger performance with an extra 20%-50% cost over other methods [A].
Therefore, considering its computational resources, our improvement is not marginal compared to ALA.
Questions:
1. (1) For the benefits of the gradient concerning the loss, please refer to Weakness 1;
(2) Please kindly refer to Lines 130-136, main paper, for theoretical analysis. The gradient concerning the DLN involves the information of both training and validation data. As these pieces of information depend on each other, they are integrated into the temporal changes of the student. The gradient concerning DLN plays a crucial role in fusing these pieces of information. Therefore, compared to the state of the student, the gradient is the integration of information throughout the learning process, which provides more information to promote deep interaction between DLN, teacher, and student.
2. (1) At test time, the number of parameters of the student model is the same in ours and the other baselines, so the comparisons are fair. Note that in the L2T framework, the teacher and the dynamic loss network only affect the optimization of the student model during the training phase, while the student model operates independently during the testing phase;
(2) The number of parameters in the DLN is 5,000. Compared to commonly used models (e.g., ResNet8 has 1,235,274 parameters), 5,000 parameters is relatively small;
In response to the reviewer's concerns, we constructed an experiment: we added 5,000 parameters to the teacher model of the baseline 'Stochastic Loss Function' (SLF, reference [22], main paper). SLF then achieves a testing accuracy of 89.97% on CIFAR-10+ResNet8, while our L2T-DLN still maintains a performance advantage (90.65%).
[A] Huang, C., Zhai, S., Talbott, W., Bautista, M. A., Sun, S. Y., Guestrin, C., & Susskind, J. Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment Supplementary Material.
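To make the role of the gradient concerning the DLN concrete, below is a minimal scalar sketch (a hypothetical toy setting with a single student weight `w` and a single loss parameter `phi`, not our actual implementation) of how the chain rule fuses training and validation information into one meta-gradient:

```python
def meta_gradient(w, phi, y_train, y_val, lr):
    # Training loss shaped by the dynamic loss parameter: L = phi * (w - y_train)^2
    dL_dw = 2.0 * phi * (w - y_train)        # gradient driving the student step
    w_new = w - lr * dL_dw                   # one student update under this loss
    # Validation loss evaluated at the updated student: V = (w_new - y_val)^2
    dV_dw_new = 2.0 * (w_new - y_val)        # validation information
    dw_new_dphi = -lr * 2.0 * (w - y_train)  # how phi altered the update (training information)
    return dV_dw_new * dw_new_dphi           # chain rule: dV/dphi

def meta_gradient_fd(w, phi, y_train, y_val, lr, eps=1e-6):
    # Finite-difference check of the analytic chain-rule gradient.
    def val_loss(p):
        w_new = w - lr * 2.0 * p * (w - y_train)
        return (w_new - y_val) ** 2
    return (val_loss(phi + eps) - val_loss(phi - eps)) / (2 * eps)
```

The product of the two factors shows why this single gradient carries both the validation signal and the training-step sensitivity, rather than requiring the teacher to integrate dispersed student states itself.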
---
Rebuttal Comment 1.1:
Comment: The response resolves my concerns. I suggest incorporating the comparison of "the length of a student learning stage" into Table 1 to better highlight the superiority of the DLN. Thus, I raise my final rating from 4 to 5.
---
Reply to Comment 1.1.1:
Title: Thanks to reviewer HMfR
Comment: Many thanks for all the helpful comments and positive assessments.
We really appreciate reviewer HMfR upgrading the score. Thank you again for your valuable comments and time. We will incorporate the comparison of "the length of a student learning stage" into Table 1 to better highlight the superiority of the DLN. | Summary: This paper introduces an improvement to the learning-to-teach framework. Compared with previous works, the authors make the innovation of adding a dynamic loss network, with an LSTM acting as the teacher model to enhance temporal memorization. A three-step optimization procedure is also proposed, together with a theoretical convergence analysis. Lastly, extensive experiments are conducted on various computer vision tasks, demonstrating the effectiveness of the proposed approach.
Strengths: 1. The problem this paper studies is an important yet insufficiently studied one in the current literature: how to use a better teaching strategy in addition to merely conducting "learning". It is a very meaningful step forward that pushes the frontier of this literature; furthermore, the design of the dynamic loss network is a new perspective for this area as well, going beyond dynamic loss function design.
2. The paper is technically very sound, with sufficient insight and depth. I particularly appreciate the convergence analysis, which is done for the first time in the machine teaching/learning-to-teach area, with good depth leveraging the negative curvature.
Weaknesses: As also stated in the paper, despite being theoretically sound and elegant, the proposed algorithm incurs much more computational overhead, which will hinder its practical usefulness in real-world applications. In particular, the computation of the Hessian and even third-order derivatives is a sign of its computational complexity. I agree with the authors that this needs to be improved further in the future to enlarge the impact of this work.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: While the memory units of the LSTM make sense in this scenario, how about using a Transformer as the sequential model, which may enhance temporal memorization even further compared with the LSTM?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Stated in the paper as well as the above point, that the computational complexity is one concern while it seems feasible and justifiable to further solve this, as the future potential value of this direction is likely large.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Our goal in the submission is to revisit L2T and formulate loss adjustment as a temporal task. We hope our discussions inspire further work. Thank you for your valuable suggestion of using a Transformer architecture as the teacher. Using a Transformer as the teacher model involves simultaneously processing sequential data across timestamps; it may face challenges, e.g., the accumulation of historical gradients and pressure on storage resources. We will explore this suggestion in the future.
2. Thank you for your valuable comments and efforts. We will attempt the following strategies in our future work:
(1) Designing appropriate prior knowledge for enhanced fusion to curtail the need for extensive higher-order gradient computations;
(2) Employing a finite-element-method paradigm by partitioning the parameters associated with the student, DLN, and teacher models into discrete finite elements;
(3) Integrating a multivariate Taylor series expansion to more effectively navigate complex computations while optimizing resource consumption. | Summary: In this paper, the authors state that existing methods employ only a simple feedforward network as the teacher model, which limits the potential of L2T. This issue motivates the authors to propose a network with a memory unit to enhance the temporal analyzing ability of the teacher in learning-to-teach (L2T) tasks. This new network is combined with a dynamic loss network to address the issue of the existing works above. The experiments show the superiority of the proposed method.
Strengths: (+) This paper theoretically analyzes the convergence of the proposed method.
(+) The proposed method shows the state-of-the-art performance compared with other methods.
Weaknesses: (-) The novelty of this paper is not clear. In the title, abstract, and introduction, the authors repeatedly emphasize that they propose an L2T framework with a Dynamic Loss Network (L2T-DLN). However, these emphases can be misleading, since Dynamic Loss has already been introduced in previous work, as stated by the authors themselves in Lines 2, 23, and the caption of Figure 1. Consequently, the application of Dynamic Loss in L2T cannot be considered the novelty or the main contribution of the paper. The continued highlighting of DLN in these sections adds to the confusion and may hinder the understanding of the paper. In my understanding, the primary contribution of this paper is a proposed network with a memory unit to enhance the temporal analyzing ability of the teacher in learning-to-teach (L2T) tasks. By emphasizing this aspect, rather than the Dynamic Loss Network, the paper will become more coherent and easier to comprehend.
(-) The authors have not adequately demonstrated the effectiveness of their proposed method in both theory and practice. In the theoretical analysis, they solely focus on the convergence of the proposed L2T-DLN and fail to illustrate how the proposed network with a memory unit for the teacher can benefit the training of L2T task. Additionally, the experiments show the better performance of the proposed method compared with other SOTA methods. However, based on section 5.1, this better performance is based on many combinations of loss functions and tricks. It remains unclear whether the proposed network with a memory unit for the teacher contributes to the enhanced performance observed in the L2T task.
(-) The paper lacks sufficient ablation studies to support its claims. In the ablation study section, the authors include some experiments related to selecting the best learning parameters for their proposed method; these parameters may primarily influence performance rather than directly relating to the main contributions of the proposed method. Therefore, it is crucial for the authors to conduct a series of ablation studies to demonstrate how the proposed network with a memory unit for the teacher can benefit the training of L2T tasks.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: No limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
Weaknesses:
1. The novelty of our approach is introduced in Lines 8-13 and Lines 50-55, main paper, which are:
(1) design a teaching strategy based on the gradient concerning DLN;
(2) use LSTM as the teacher model to update the DLN with the temporal information;
(3) provide a convergence analysis of the approach.
Using a network with a memory unit to enhance the temporal analyzing ability of the teacher is one of our contributions.
The introduction of background knowledge, i.e., existing studies of L2T with dynamic loss, is necessary. Lines 2, 23, and Figure 1 (main paper), aim to help readers understand the difference between L2T-DLN and existing works about L2T with dynamic loss (DL). By realizing the differences, the readers will not be misled that the usage of DL is our contribution.
2. Please kindly refer to Section 3 and Section 5 for theoretical analysis and effectiveness demonstration of our approach. In practice, we evaluate three downstream tasks, i.e., image classification, object detection, and semantic segmentation. Experiments on three downstream tasks (refer to Section 5.2, main paper) and ablation study (refer to Lines 284-291, main paper) demonstrate that:
(1) L2T-DLN has shown state-of-the-art performance compared with other methods on different downstream tasks;
(2) The ablation study draws two conclusions: first, algorithms that can use historical information perform better than those that cannot; second, the adaptability to capture and maintain short- and long-term dependencies can further enhance loss function teaching, compared to handcrafted methods.
3. Please kindly refer to Section 5.3 in our main paper and Section 5.2 in the supplementary material for the ablation study. The ablation study of w/o memory unit is shown in Lines 284-291, main paper, and Lines 215-221, supplementary material. | Summary: This paper presents a three-stage framework that dynamically adjusts the learning process of student model (i.e., target model). The loss value is calculated via the proposed Dynamic Loss Network (DLN), parameterized as neural network. And the DLN is updated by the teacher network implemented as LSTM. The authors also provide a convergence analysis. The model is evaluated on an array of tasks including image classification, object detection and semantic segmentation.
Strengths: 1. Compared to previous works, L2T-DLN leverages the state of the loss function and the temporal information of the student's learning experience to update the teacher.
2. Experiments on various applications, including image classification, object detection, and semantic segmentation, showcase the effectiveness.
3. The provided convergence analysis is helpful to understand the framework.
Weaknesses: 1. The proposed DLN is not unified. For image classification, it produces the loss value. For object detection, it generates the loss weights.
2. Some technical details of the DLN are missing, as will be discussed in the questions below.
3. To access the temporal information of student learning, L2T-DLN would store the historical models and consume lots of memory.
4. Since the DLN adjusts the loss weights of the objectives of YOLO-v3, it’s reminiscent of multi-task learning. In this regard, the authors should discuss the relationship with MTL.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The output layer (e.g., activation function) of the DLN is not clearly described.
2. It would be better to understand the DLN if the loss value/weights can be visualized.
3. Figure 1 in the supplementary material indicates the DLN (blue line) would produce lower gradients applied to the student model. Is this because the baseline model uses a large learning rate (0.1, Line 219)?
4. The claimed effectiveness would be more convincing if an ablation study of learning rates is provided.
5. Regarding the initial gradients produced by the DLN, which greatly exceed the baseline's, this may be related to the network initialization of the DLN. Since this detail is missing, I would suggest the authors clarify the initialization.
6. For the DLN of object detection, is it initialized as identical mapping? If not, it’s helpful to investigate this manner.
7. What’s the training epoch for baseline model and the L2T-DLN for image classification?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: 1. Some details (e.g., initialization and activation of the output layer) of the proposed DLN are missing.
2. The used learning rate for image classification may not be optimal, an ablation study would help understand the framework.
3. It would be helpful to visualize the output of DLN.
4. The framework may consume lots of GPU memory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
Weaknesses:
1. Object detection involves two sub-tasks, i.e., regression and classification. It is challenging to handle both tasks simultaneously with one dynamic loss. An alternative commonly used in existing dynamic-loss-based works is to dynamically combine the objective functions of the different tasks. Therefore, we follow their setting in our L2T-DLN.
2. We provide point-to-point responses in the next sections.
3. Our memory unit stores the long- and short-term dependencies captured during the teaching process, disregarding the historical models. The size of the memory unit is fixed at 2.55 MB and does not consume much memory.
4. L2T-DLN employs a dynamic loss set by a teacher to train a student model on a specific task, whereas MTL leverages potential correlations between diverse tasks to train a shared representation network. The L2T-DLN adjusts loss weights based on the temporal relationship between student performance and losses, for the same task, instead of using inter-task correlations as in MTL.
Questions:
1. Our DLN is a fully connected network (kindly refer to Lines 233-235, Lines 246-247, and Line 257, main paper, and Lines 184-186, supplementary material), as well as the output layer. Following existing works, the output layer is not processed by any activation function.
2. We visualize the loss value of DLN on MNIST and CIFAR-10 separately in Figure 1 and 2, rebuttal. The DLN is initialized with the kaiming normal initialization with LeakyReLU activations.
3. No. Both our L2T-DLN and baseline follow the same learning rate settings of related work (references [3, 7, 15], supplementary material) about noise-label classification.
4. We evaluate the student model (ResNet8) with different learning rates on CIFAR-10. The learning rates were set to 1, 0.5, 0.1, 0.05, 0.01, 0.005, and 0.001, corresponding to accuracies of 18.8%, 49.0%, 90.7%, 87.6%, 88.1%, 87.3%, and 84.2%, respectively. Therefore, we set the learning rate to 0.1 in our experiments.
5. The DLN is randomly initialized with Kaiming normal initialization coupled with LeakyReLU activations for the classification and segmentation tasks. For the object detection task, all parameters of the DLN are initialized to a value of 1. There is no significant correlation between the substantial gradient and the initialization of the DLN during the initial training phases. In fact, in the early training stages, the significant DLN gradient prompts substantial changes in the student model, thereby broadening L2T's exploratory reach within the solution space. This expanded exploration facilitates the investigation of numerous local optima, enabling the selection of the most favorable among them.
6. Yes, it is initialized as identical mapping.
7. The baseline is trained with 200 epochs. In our approach, we set the teacher learning phase for 10 epochs (Line 222, main paper).
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thanks for the response. This response solves most of my concerns.
I still have some doubts about the used learning rate. The paper claims that it can learn to teach (i.e. optimize) the student model by the produced gradient or loss weights. I wonder if the proposed method is able to **correct** the optimization process if a **wrong** learning rate is used. As your response said, using a learning rate of 0.5 will result in a poor student baseline. Then, your method should **lower** the learning rate (i.e., gradients of low magnitude) to help the student improve the performance.
Kind regards,
---
Reply to Comment 1.1.1:
Title: Thanks to reviewer wSe9
Comment: Thank you for your valuable and constructive comments, we appreciate this opportunity to respond to your comments and address your concerns.
1) Compared with other L2T baselines, e.g., 'Stochastic Loss Function (SLF)', our proposed method achieves relatively higher accuracy even with a `wrong` learning rate. Specifically, with dynamic loss and the learning rate set to 0.2, 0.3, 0.4, and 0.5 in the CIFAR-10-ResNet8 task, SLF achieves 89.2, 87.5, 66.0, and 13.5 in accuracy, while our L2T-DLN achieves 89.7, 88.1, 70.4, and 49.0. The above comparison indicates that our method outperforms the previous state-of-the-art SLF under the wrong learning rate (0.4/0.5) setting. Compared with SLF, our method is less sensitive to a wrong learning rate.
2) To our knowledge, all L2T methods require setting a proper learning rate for optimization. For example, SLF and L2T-DLF set their learning rate to 0.1. The reason is that a large learning rate leads to drastic changes in the student model that go beyond the teaching capability of the teacher model. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable comments and efforts. Our proposed L2T-DLN is technically very sound with enough insights and depth [79Na], provides a thorough convergence analysis [wSe9, HMfR], and outperforms state-of-the-art methods in various downstream tasks [9Vkw,wSe9]. We provide point-to-point responses in the Official Review system.
Pdf: /pdf/4295c98cd9c13ef4e0578a4f084a91b7771c960f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Streaming Algorithms and Lower Bounds for Estimating Correlation Clustering Cost | Accept (poster) | Summary: The paper studies correlation clustering in the streaming model, where the edges of the underlying graph are updated one at a time. Where previous work has focused on the semi-streaming model with $\Omega(n)$ space, they consider more classic streaming with a polylogarithmic number of bits. Here it is not possible to describe the full clustering, so the goal is to estimate the \emph{cost} of the correlation clustering. They consider $(a,b)$-approximation algorithms where, if the optimal cost is $OPT$, the algorithm provides an estimate of at least $OPT$ but at most $a\cdot OPT+b$. Their first result is a single-pass streaming algorithm that is an $(O(1),\delta n^2)$-approximation using $\mathrm{polylog}(n)/\delta^5$ space. Here the constant in the $O$-notation is rather large. However, they also present an algorithm which is a $(3,\delta n^2)$-approximation in expectation using $2^{O(1/\delta)} \mathrm{polylog}(n)$ space. The first of these results is inspired by the sparse-dense decomposition [AW'22]. At a very high level, using a sample-based approach, they estimate the number of so-called $\varepsilon$-sparse and $\varepsilon$-dense edges and relate these to the optimal clustering cost. This can be done for vertices of degree at least $\delta n$, and the vertices with smaller degrees can contribute at most $\delta n^2$ to the total cost (this is where the additive term comes from). This is by no means trivial and requires substantial technical work. For the second algorithm, they simulate an algorithm from an unpublished manuscript [BGK13].
The paper also provides lower bounds showing that an $(1,n^{2-\varepsilon})$-approximation requires $n^\varepsilon$ space (this is via a reduction from the INDEX problem) and that an $(1.19,O(n))$-approximation requires $\Omega(\sqrt{n})$ space.
The paper also provides experiments demonstrating the performance of their algorithms on the stochastic block model (SBM) and on Erdos-Renyi (ER) random graphs. It is of interest that their algorithms are able to distinguish the clusterable SBM setting from ER graphs $G(n,p)$, which are not clusterable at all.
Strengths: It is an interesting problem to study the correlation clustering problem in the low-space regime, from both a theoretical and a practical perspective. The paper is well written, and the proofs I looked at in the supplementary material also seem clearly written (I only looked at a few of them, so I cannot vouch for correctness; however, I gave a 4 for soundness based on the ones I checked). The technical contribution of the paper seems impressive, in particular since they also provide lower bounds for the problem.
Weaknesses: It is a little unclear which ideas are novel and which ideas are inspired from previous work e.g., [AW'22].
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: l85: This seems incorrect. To get sublinear space, you put $\delta=n^{-0.19}$ and then the additive error is $n^{1.81}$
l104-110: This section is a little clumsily written and could use a rewriting to make it clearer
l119: What does $0.04\% \sim 3.6\%$ mean?
l276-277: $S_i$ and $m_u$ have not been defined
l347-348: This seems unsurprising. Could it be proved by some union bound approach?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and your positive feedback and also for catching all our tiny errors. Below are our responses:
**Point1: Which ideas are new and which are existing.**
Getting approximate clustering in $\tilde{O}(n)$ space via the algorithms (SDD/Pivot) was already known. The novel idea here lies in estimating the costs of these algorithms within polylog(n) space only (especially since it requires exponentially larger $\Omega(n)$ memory to directly run SDD/Pivot).
The main idea for the SDD algorithm was that we could get an $O(1)$ approximation to the cost using epsilon sparse and epsilon dense edges. We then built the estimators to estimate the number of these edges with an additive error.
The main idea for the pivot algorithm was that we could do the greedy MIS (to find the pivots) after the stream. Then we built estimators to estimate the cost of the clusters.
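For reference, the Pivot idea mentioned above (choosing pivots via a greedy random-order MIS on the '+' graph) can be sketched as follows. This is a minimal, non-streaming sketch of the classic Pivot procedure only, not the paper's polylog-space cost estimator; the function and argument names are illustrative.

```python
import random

def pivot_cluster(nodes, pos_neighbors, rng=random):
    """Classic Pivot: visit vertices in random order; each still-unclustered
    vertex becomes a pivot and absorbs all of its still-unclustered
    '+'-neighbors. The pivots form a greedy random MIS of the '+' graph,
    and the clustering is a 3-approximation in expectation."""
    unclustered = set(nodes)
    clusters = []
    order = list(nodes)
    rng.shuffle(order)
    for p in order:
        if p not in unclustered:
            continue
        cluster = {p} | (pos_neighbors.get(p, set()) & unclustered)
        unclustered -= cluster
        clusters.append(cluster)
    return clusters
```

For example, on a '+' graph consisting of two disjoint triangles, any pivot order recovers the two triangles as clusters.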
**Point 2: Questions/MISC comments.**
L85: Yes, you are right, we made a calculation error. Thanks for the catch! What we meant to say was that setting $\delta$ to $n^{-0.19}$ would make the space $o(n)$ and the error $o(n^2)$. We will fix this.
L104-110: We agree that this paragraph is not very well written. We will rewrite it in the final version.
L119: What we mean here is the following: The fraction of total edges stored by the Pivot-based algorithm is 0.04%.
The fraction of total edges stored by the SDD-based algorithm is 3.6%.
L276-277: $S_i$ and $m_u$ are defined in the full version (supplementary material), but when we created the conference version, some subtle ordering issues appear to have arisen (e.g., $S_i$ is defined in the algorithm box, which appears before Claim 4.2, but in the conference version it appears later). We will fix this in the final version.
L347-348: We can prove that an Erdos-Renyi random graph with p=0.5 has an optimal clustering cost of $\Omega(n^2)$ using charging arguments similar to [AW22]. We are not familiar with the ``union bound approach’’ mentioned in the review – we will be happy to learn more from you about this. | Summary: The paper gives polylog space streaming algorithm for approximately computing the value of the cost of correlation clustering. An algorithm for finding the (approximate) correlation clustering in polylog space is known to be not possible. It is also known that approximately computing the cost within a multiplicative factor is not possible in polylog space. This paper designs polylog space streaming algorithm for approximating the cost within a constant multiplicative plus an O(n^2) additive factor. They show that such an additive factor is necessary. The techniques used are sparse-dense decomposition of [AW22] and local correlation clustering of [BGK13].
Strengths: 1. The theoretical results are novel, well-motivated, and consistent with the known results. All the bounds are well justified. There aren't significant gaps in the presented results.
2. The paper is well written. The results are well-motivated, related work is clearly stated, and the main ideas are clearly outlined in the allowed space.
3. Experimental results show the capability of the algorithm to distinguish between high and low clusterable instances of random graphs generated in the stochastic block model.
Weaknesses: 1. Using some space to motivate the practical aspects of cost estimation (instead of computing the solution) would be useful for readers. In the current version, this aspect of the motivation is deferred to previous works.
2. The experimental section also does not help with point (1) since it separates random graphs based on clusterability. A discussion on a real world example may help.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Some of the other queries are mentioned above within the other fields.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: This work is mostly theoretical. There are no negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and your positive feedback. Our responses are as follows.
**Point 1: Practical motivation.** Thanks for pointing this out; we will add a discussion of the motivation in the next version. Roughly, the motivation is as follows. In large-scale applications, we sometimes want to know whether instances are worth clustering at all, as resources might be limited, or rather how much better off we would be after clustering (trying to understand the 'clusterability' of the instances). If we can test the clustering cost with only $\text{polylog} \ n$ space, then such a task can be accomplished on a local machine (as opposed to using a large cluster as in semi-streaming).
We could also use the value to differentiate between different classes of graphs that have different clustering costs.
We also agree that adding real-world examples will help, and we intend to make changes in the final version.
In our current paper, we believe that separating random graphs with SBMs *is already* a toy example for real-world applications. Consider two types of social networks: one is more structured (like SBM); the other is more chaotic (random graph-like). Our algorithm allows the users to classify different types of social networks in this regard.
---
Rebuttal Comment 1.1:
Comment: This is to acknowledge that I have read author rebuttal. I do not have any other questions. | Summary: The authors initiate a study of the correlation clustering problem in streams, in the setting when only the *cost* of the clustering needs to be output. Prior work on correlation clustering in streams focused on the "semi-streaming" model, in which the clustering must be output but a space of $\tilde\Theta(n)$ is allowed. On the other hand, the authors obtain algorithms using space $o(n)$ and even $\mathrm{poly}\log(n)$ when only the cost but not the clustering itself is required. The problem of computing the cost is motivated as a measure of the "clusterability" cost, and can be used to determine whether it makes sense to compute a clustering or not.
The authors provide both upper bounds and lower bounds for this problem. For upper bounds, the authors show that in $\mathrm{poly}\log(n)$ bits of space, one can obtain a mixed additive-multiplicative error guarantee, with $O(1)$ multiplicative error and $o(n^2)$ additive error. The authors also show that a purely relative error guarantee cannot be obtained in $\mathrm{poly}\log(n)$ space, by showing that an approximation with $c = O(1)$ multiplicative error and $\epsilon n$ additive error requires $\Omega(\sqrt n)$ space, for some choices of constants $c$ and $\epsilon$. Another lower bound the authors give shows that an algorithm achieving a purely additive $n^{2-\epsilon}$ error requires $\Omega(n^\epsilon)$ space. Thus, one cannot hope for much better than the algorithms given by the authors in general. Furthermore, the authors argue that the $o(n^2)$ additive error is in fact highly useful in practice. Indeed, for a typical application of detecting "clustered" communities in a stochastic block model where edges within communities are connected with probability p > 0.5 (say 0.8) and edges between communities with probability p < 0.5 (say 0.2), the $o(n^2)$ additive error algorithm suffices. The practical uses of the algorithm are verified empirically.
In terms of techniques, the authors give two upper bounds, one based on the classic Pivot algorithm and another based on sparse-dense decompositions developed in prior work. In both cases, the authors carefully implement the algorithms in the streaming setting by running the algorithms only on "heavy hitter" vertices and by gathering necessary statistics using sketching techniques.
Strengths: This work provides both a novel streaming problem with many interesting open questions and nice practical motivations, as well as an interesting and highly nontrivial upper bound. The gap between the upper bounds and lower bounds in this work is highly intriguing, and I would expect this work to stimulate many interesting follow-up works that tighten the results. Thus, the problem that is introduced is extremely interesting. The algorithmic techniques used in the upper bound seem to be a combination of prior work (known algorithms for correlation clustering combined with heavy hitters), but making this go through requires a lot of work in the analysis and is highly nontrivial.
Weaknesses: Just a couple of comments on the empirical results (however this is not that significant since the contribution of this work is primarily theoretical):
* The axes in Figure 1 (and all the figures in the appendix in the supplementary material) are illegible. Please take advantage of the space availability in the supplementary material to provide better plots.
* If $n = 1000, 2000$, then a semi-streaming algorithm would hypothetically use only roughly n/n^2 ~ 0.1% of the edges, so 4% or 10% of the edges does not seem that impressive in comparison at first glance. Are there any comparisons to semi-streaming algorithms in terms of empirical performance?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have thoroughly discussed the limitations (e.g. the additive error) and justified it (e.g. lower bounds).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your detailed review and your positive feedback. Here are our responses to the questions:
**Point 1: Problems with the axes in the Figures.** Thanks for spotting out the issue, and we will fix the figures by making the text on the axes larger.
**Point 2: Implementation of semi-streaming and the percentage of edges used.** We are not aware of any prior implementations of streaming correlation clustering algorithms. There appear to be quite a few barriers to implementing a complete large-scale semi-streaming correlation clustering system (or even large-scale semi-streaming in general): aspects assumed not to be problems in theory, e.g., polylog(n) overheads, I/O speed, and the need for external memory, become quite problematic in practical systems. Take the connected-components sketch as an example: the algorithm has been known since Ahn et al. [SODA'12], yet the first implementation was only given quite recently by Tench et al. [SIGMOD'22] (`GraphZeppelin'). The practical implementation of correlation clustering in semi-streaming space seems to be a very interesting problem on its own.
With regard to the percentage of edges saved, we suspect that the leading constant makes the percentage of stored edges rather high when $n$ is not too large (n=1000 or n=2000). Also, the log factors may become large in practice: for instance, when $n=2000$, $n/n^2=0.05$%, but $n \log^{3}n / n^2=65.9$% (log with base 2, a typical semi-streaming bound). These factors may have contributed to the discrepancy between the theoretical bound and the practical performances and would affect semi-streaming algorithms too. | Summary: This paper studies the correlation clustering problem in the streaming setting. Unlike previous work they consider algorithms with space much smaller than the number of vertices and only find the cost of the optimal clustering, not the clustering itself. They define an (alpha, beta) approximation to be an additive error of alpha*OPT + beta. Their "result 1" and "result 2" are for any constant delta > 0 streaming algorithms with approximations of (O(1), delta n^2) and (3, delta n^2) respectively and space O~(1/delta^5) and O~(2^O(1/delta)) respectively. Their "result 3" and "result 4" are impossibility results. They also give some experimental results.
Strengths: The paper is well written.
The impossibility results ("result 3" and "result 4") look interesting, though I haven't had time to read the proofs.
Weaknesses: The first and biggest weakness of this paper is that the "result 1" and "result 2" in the intro can straightforwardly be beaten using known techniques. In particular, one can get a (1, delta n^2) approximation in poly(1/delta) space and time poly(n) + 2^poly(1/delta). To do this, simply take a random sample of k = Theta~(1/delta^4) vertices, solve correlation clustering on the sample with an algorithm with delta k^2 additive error, and then scale up the cost by a factor of (n/k)^2. (If one-sided error is desired as in this paper's definitions you also need to subtract roughly delta n^2 from the estimate.) The key idea is the fact that to get delta n^2 additive error it suffices to consider clusterings with O(1/delta) clusters, which makes correlation clustering an instance of the well-studied MAX-CSP family. A sample size of k being enough to approximate the value of MAX CSPs to an additive error first appeared in a paper by Arora, Frieze, K and Karpinski (AFKK) in the 1990s IIRC. (I'll try to track down the exact citation and some more details in a comment to be added this weekend.) For the approximation algorithm one can either use that general-purpose AFKK algorithm or one of the published PTASs for correlation clustering with a constant number of clusters, e.g. [8] in https://en.wikipedia.org/w/index.php?title=Correlation_clustering&oldid=1147372897 (which includes this problem despite the title not mentioning it). The algorithms I mentioned aren't very practical, so for experiments you could apply a branch and bound solver such as SCIP, CPLEX, or Gurobi to the correlation clustering integer program for the sample only, which should scale to a sample size of something like 50 vertices, or use any algorithm for correlation clustering (but with a worse performance guarantee). To do the random sampling in a streaming fashion, keep track of the k vertices with the smallest hashes seen so far and any edges between them.
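The sampling step at the end of this suggestion is simple to sketch. The following toy implementation is ours and purely illustrative: a brute-force enumeration over set partitions stands in for the PTAS/ILP solver the reviewer mentions, so it is only feasible for very small sample sizes. It keeps the k vertices with the smallest hashes, retains the positive edges among them, and scales the sample's optimal cost by (n/k)^2.

```python
import hashlib
import itertools

def vertex_hash(v, seed=0):
    # deterministic pseudo-random hash in [0, 1)
    h = hashlib.sha256(f"{seed}:{v}".encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def stream_sample(pos_edge_stream, vertices, k, seed=0):
    """Keep the k smallest-hash vertices and the positive edges among them."""
    sample = sorted(vertices, key=lambda v: vertex_hash(v, seed))[:k]
    keep = set(sample)
    kept = {frozenset(e) for e in pos_edge_stream if keep.issuperset(e)}
    return sample, kept

def set_partitions(items):
    # all set partitions of a small list (Bell-number many, so keep k tiny)
    if not items:
        yield []
        return
    head, *rest = items
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [head]] + part[i + 1:]
        yield part + [[head]]

def cc_cost(sample, pos_edges, clustering):
    """Disagreements: positive edges cut, plus negative pairs kept together."""
    cluster_of = {v: i for i, block in enumerate(clustering) for v in block}
    cost = 0
    for u, v in itertools.combinations(sample, 2):
        same = cluster_of[u] == cluster_of[v]
        positive = frozenset((u, v)) in pos_edges
        if positive != same:
            cost += 1
    return cost

def estimate_opt(vertices, pos_edge_stream, k, seed=0):
    sample, kept = stream_sample(pos_edge_stream, vertices, k, seed)
    best = min(cc_cost(sample, kept, p) for p in set_partitions(sample))
    return best * (len(vertices) / k) ** 2  # scale sample cost to full size

# two disjoint 6-cliques: a perfectly clusterable instance (OPT = 0)
verts = list(range(12))
edges = [(u, v) for u, v in itertools.combinations(verts, 2)
         if (u < 6) == (v < 6)]
print(estimate_opt(verts, edges, k=6))  # 0.0: every induced sample is clusterable
```

Note this omits the one-sided-error correction (subtracting roughly delta n^2) mentioned in the review; it only illustrates the sampling and scaling mechanics.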
Another big weakness of this paper is that correlation clustering instances large enough not to fit on one machine, and hence be interesting for streaming algorithms, are likely to be sparse, i.e. have o(delta n^2) positive edges. In practice one can therefore probably do better than their result 1 or result 2 (or the algorithm I sketched above) by simply returning the number of positive-weight edges, which is a trivial upper bound on OPT (it's the cost of the all-singletons clustering). I can't rule out the existence of applications where this sort of guarantee would be useful, I'm just not aware of any.
Another weakness of this paper is that streaming models are only useful in a somewhat narrow set of circumstances: instances small enough that you have time to pass all the data through one machine but large enough that you don't have space to store the full data on one machine. Many streaming results also work in a distributed model such as MPC, which is more practically relevant in my experience, but the authors don't discuss distributed models, so I can't tell without reading everything in detail.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: Any comments on the weaknesses mentioned in that section?
=== Misc comments not really questions but I'm not sure where else to put them =====
Abstract: replace "improves the additive error further down" with "further reduces the additive error"
Correlation clustering appeared (but by a different name) in a 1989 paper by Grötschel and Wakabayashi titled "A cutting plane algorithm for a clustering problem"
(https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.468.4826&rep=rep1&type=pdf). I'm guessing there are probably even earlier papers somewhere. So the 2004 paper you cite as introducing the correlation clustering problem actually reintroduced it.
cost(C) is defined twice, first in section 2.1 and then in section 2.2. Also the definition in section 2.1 is incomplete since it doesn't say whether the cost is the number of disagreements or the number of agreements or something else, though the word "cost" does hint that it's probably the number of disagreements.
Using the shape of the brackets to distinguish N^-[] from N^-() and N^+[] from N^+() is a bit weird since when bracket shape is used to distinguish notations the two notations are usually totally unrelated, not variations of the same thing. I would prefer a more traditional notation, e.g. overbar on the N for the version that includes the vertex itself.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 1 poor
Limitations: See weaknesses section for some limitations. No societal impact issues.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed review and the critical new idea. Please find our responses below.
**Point 1: The suggested $(1, \delta n^2)$-approximation algorithm.** Thank you for raising this observation. If we understand it correctly, the key idea of your suggestion is to reduce min-disagreement with $\delta \cdot n^2$ additive error to max-agreement correlation clustering with $\delta \cdot n^2$ additive error. Also, it is okay to work with just $1/\delta$ clusters when the additive error is $\delta n^2$. Finally, one can use sparsification results to solve the $(1/\delta)$-cluster version of CSP on a small sample. This is a very nice approach that we had not considered previously, and it does seem plausible to us that an approach along these lines could work.
That being said, we did not find this approach trivial or straightforward, nor are we fully convinced at this stage that it works without a complete proof. If this algorithm was known previously, we have missed it and would appreciate a reference to the result. However, we would like to point out that approaches like this have been tried for (max-agreement) correlation clustering, e.g., in ``Sublinear Algorithms for MAXCUT and Correlation Clustering’’ by Bhaskara et al. [ICALP’18], and they appear to require substantially more work to address the subtle issues arising in this context, rather than being a straightforward application of known tools (these tools themselves are also not exactly simple to begin with). In addition, as your review also points out, such an algorithm would require highly sophisticated “post-processing” algorithms that make its runtime quite high (we suspect at the very least an $n^{1/\delta}$-type bound) and its implementation in practice infeasible (although, as a purely theoretical result, such a bound would fully complement our lower bounds and is certainly of independent interest). This is in contrast to our approach, which offers practical streaming algorithms with quite straightforward implementations. Thus, we do not see your suggestion as rendering our results trivial, and we stand by the contribution of our paper as providing the first truly sublinear-space streaming algorithms for this fundamental problem.
We will be happy to try out the suggested algorithm idea in detail and write a proof in a future version; alternatively, we can also leave the idea out, since it is not originally ours. We will be happy to hear your opinion on this front.
**Point 2: Concerns about the model.** We were actually rather surprised by the comments about the perceived impracticality of streaming algorithms in general. While we do understand that certain companies, say, Google, may indeed have a strong preference for MapReduce-style computation frameworks, this is certainly not the case across the board and is quite application-dependent, as evidenced by the vast amount of work on various streaming platforms. Indeed, streaming algorithms are not designed for processing 20-TB graphs, but they can target 1-TB graphs quite easily by making one sequential pass over them very quickly and using RAM for the much smaller memory needed by the algorithm, which gives a tremendous speed-up compared to directly running the algorithm on a 1-TB graph. We refer the reviewer to the papers ''GraphZeppelin: Storage-Friendly Sketching for Connected Components on Dynamic Graph Streams’’ by Tench et al. and ''Practice of Streaming Processing of Dynamic Graphs: Concepts, Models, and Systems’’ by Besta et al. for examples.
That being said, our results **immediately** extend to these other models as well, in the same standard way that prior streaming algorithms do. Since both of our algorithms are based on *linear sketches*, they naturally imply fully scalable MPC algorithms with a constant number of rounds – we only need an MPC with $O(n^{c}\cdot \text{polylog} \ {n/\delta})$ space per machine, and the number of rounds is $O(1/c)$. The algorithm simply has to run the linear sketch locally and broadcast with the standard communication tree; correctness is guaranteed by the properties of linear sketches. In fact, it appears both of our algorithms can be executed in the stronger PRAM model with $O(1)$ depth and $\text{polylog}\ {n}$ work – we will check the details of whether this works in future versions.
**Point 3: Concerns about the correlation clustering instances being sparse.** We believe there are many streaming applications where large graphs have $\Theta(n^2)$ edges. One toy example is the stochastic block model (SBM) we studied in the experiment section, in which the instances have $\Omega(n^2)$ intra-block edges alone (which do not even contribute to the cost). The SBM has been studied extensively in the literature, and it captures a wide range of social network applications, e.g., community detection.
The SBM is also a prime example showing that simply returning the number of edges gives a bad approximation of the correlation clustering cost. Note that when $p$ is large in the SBM, most edges do not contribute to the cost, and simply counting the number of edges would give a terrible approximation (both in theory and in practice). Take another example: for an Erdős–Rényi graph with $p=0.5$ and an SBM instance with $p=0.95$, the numbers of positive edges are roughly the same, but the costs are very different.
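The Erdős–Rényi vs. SBM comparison above is easy to verify numerically. The following small simulation is ours and illustrative (n = 100, a planted two-block SBM): the two graphs have roughly the same number of positive edges, while the cost of the planted clustering is far below the all-singletons cost, which equals the number of positive edges.

```python
import random
import itertools

random.seed(0)
n = 100
pairs = list(itertools.combinations(range(n), 2))

def er_graph(p):
    # Erdos-Renyi: every pair is a positive edge with probability p
    return {e for e in pairs if random.random() < p}

def sbm_graph(p_in, p_out):
    # two planted blocks of size n/2
    edges = set()
    for u, v in pairs:
        same_block = (u < n // 2) == (v < n // 2)
        if random.random() < (p_in if same_block else p_out):
            edges.add((u, v))
    return edges

def cost(pos_edges, cluster_of):
    # disagreements of a clustering on the complete signed graph
    return sum(((u, v) in pos_edges) != (cluster_of[u] == cluster_of[v])
               for u, v in pairs)

er = er_graph(0.5)
sbm = sbm_graph(0.95, 0.05)
planted = [0 if v < n // 2 else 1 for v in range(n)]
singletons = list(range(n))

print(len(er), len(sbm))      # roughly equal positive-edge counts
print(cost(sbm, planted))     # far below ...
print(cost(sbm, singletons))  # ... the all-singletons cost = number of + edges
```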
**MISC comments.** We appreciate the reviewer’s effort to catch the MISC issues. We will fix them accordingly in future versions of the paper. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
L-C2ST: Local Diagnostics for Posterior Approximations in Simulation-Based Inference | Accept (poster) | Summary: This paper defines a new diagnostic to check posteriors learned from simulation-based inference. The main issue in this setting is that one does not have access to likelihoods, and so one needs diagnostics that do not rely on being able to calculate likelihoods. Here the authors present L-C2ST, which essentially trains a classifier on the _joint_ space over both parameters _and_ observations to decide whether a given point is more likely under the true joint distribution, $p(x, \theta)$, or under a variational approximation $q(\theta | x) p(x)$. It turns out that a Bayes-optimal (probabilistic) classifier will assign equal weight to the two distributions for all $\theta$ for a given $x_0$ if and only if $q(\theta | x_0) = p(\theta | x_0)$ (up to some technical considerations). As a result, the authors propose training such a classifier and then seeing how much its outputs deviate from $0.5$, using that as a measure of how well $q$ matches $p$. Finally, the authors develop a hypothesis testing framework (to account for randomness in the trained classifier) to determine how significant a given deviation is.
Strengths: * Trustworthy checks of how good a given variational posterior is for real settings where we do not have access to the true posterior are sorely needed, and this work is an important step in that direction.
* The paper is well-written, well-motivated, and clearly explained.
* The theoretical results complement and motivate the proposed algorithm well (but see below for some minor technical concerns).
* The proposed algorithm is extremely simple to describe, making it practical and elegant.
Weaknesses: * My major concern is that I am not totally convinced that learning the proposed classifier is any easier than learning the variational posterior. As such, I would be concerned that diagnostics showing a good posterior fit could be driven by a far-from-optimal classifier. My intuition for why this should be a hard problem is as follows: for _any_ amortized posterior $q(\theta | x)$, if we can learn a Bayes-optimal classifier $d^*$, then one could recover the true posterior for any $x$ via $p(\theta | x) = q(\theta | x) \cdot d^*(\theta, x) / (1-d^*(\theta, x))$. As a result, learning $d^*$ for _any_ $q$ is equivalent to learning the true posterior. This is somewhat borne out in the third column of figure 2, where one needs roughly the same number of samples to learn a good posterior as one needs in order to learn a good classifier. I'm wondering if somehow the proposed approach is essentially equivalent to training two different amortized variational posteriors on independent data and then seeing how well they match. Some further simulations showing that one problem is easier than the other would alleviate some of my concerns. I should also say that I think that even if my concerns are valid, the diagnostic is still interesting and useful.
* I felt that the benchmarking could have been improved slightly. In particular, some empirical relationship between the proposed measure of fit and existing measures would be nice. For example, the authors could take the true posterior in a toy model where it is known, generate $q$ by perturbing that posterior in a systematic way, and compare how increasingly large perturbations affect both the L-C2ST measure as well as things like KL, reverse KL, TV, and so on. The results in Figures 1 and 2 hint at this, but there are a lot of moving parts (especially in Figure 2), making it difficult to determine whether to attribute power (or lack thereof) to failure of the SBI (i.e., $q$ is bad) or of the classifier (i.e. $r$ is good).
* A minor technical point throughout the proofs of the theorems is that they rely on the null hypothesis $\mathcal{H}_0$ holding on a set of strictly positive measure. That is, if $\mathcal{H}_0$ fails on a set of measure zero then the proofs would break down. In particular, in that case $\mathcal{A}$ would have measure zero and the final integral after line 454 would be exactly zero. I tend to feel that failures on sets of measure zero have no practical implications, and so this is a minor point. But, if one is going to state theorems and proofs, it would be good to be rigorous.
* Similarly, Lemma 1 should not be phrased in terms of "points" as those would have zero measure, and instead should say "If there exists a set $\mathcal{A} \subseteq \mathcal{S}$ with positive measure such that $p(\theta) > q(\theta)$ for all $\theta \in \mathcal{A}$, then there exists a set $\mathcal{B}$ with positive measure such that $p(\theta') < q(\theta')$ for all $\theta'\in \mathcal{B}$."
* Finally, there are similar (and equally minor) issues appearing in the proofs of Theorems 2 and 3.
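The density-ratio identity underlying the reviewer's first concern is the standard classifier two-sample trick: with equal class priors, the Bayes-optimal classifier satisfies $d^*(\theta, x) = p/(p + q)$, so $p = q \cdot d^*/(1 - d^*)$. A minimal numerical check of this identity (ours, with two arbitrary 1-D Gaussian densities standing in for the conditionals):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Two candidate densities: p (the "true" posterior) and q (the approximation).
p = lambda t: normal_pdf(t, 0.0, 1.0)
q = lambda t: normal_pdf(t, 0.5, 1.2)

def bayes_optimal_d(t):
    # probability that a sample at t came from p, under equal class priors
    return p(t) / (p(t) + q(t))

# Recover p from q and the optimal classifier: p = q * d / (1 - d)
for t in [-1.0, 0.0, 0.7, 2.0]:
    d = bayes_optimal_d(t)
    recovered = q(t) * d / (1 - d)
    assert abs(recovered - p(t)) < 1e-12
```

The identity is exact for the Bayes-optimal classifier; the review's point is that a *learned* classifier achieving it would be as informative as the true posterior itself.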
Typos:
* Line 103: "may lack of" --> "may lack"
* Line 240: "in average" --> "on average"
* Line 251: Missing period before "The local PP-plot"
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: * Using the same $X_n$'s in Algorithm 1 for draws from $p(\cdot | X_n)$ and $q(\cdot | X_n)$ introduces dependencies in the training data for the classifier. Can that somehow be leveraged when training the classifier (similar to how in linear regression, we would want to perform GLS instead of OLS to increase the power when data points have shared structure like this)? Along similar lines, if it is much cheaper to sample from $q(\theta | x)$ than it is to sample from $p(x, \theta)$, then could it help to somehow incorporate more samples from $q(\theta | X_n)$ in the algorithm (or perhaps the whole distribution somehow)? Obviously this would require accounting for the dependencies (perhaps via some kind of downweighting) in the training of the classifier. This is obviously not necessary to implement for this paper, but I am curious to hear the authors' thoughts.
* Algorithm 1 also seems computationally intensive due to the permutation step. Would it be possible to do something like split-sample conformal inference to speed up the p-value calculations? Again, obviously not necessary for the paper, I'm just curious.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: I don't see any potential for direct societal harm from this work. The authors may wish to include a limitations section that highlights failure modes of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for saying that our paper is well written and clearly explained. We also appreciate that they found our method "simple and elegant" and recognized that we aim at an important practical question for simulation-based inference. Below, we address some of the remarks and questions raised by the reviewer, which were very interesting and point towards exciting perspectives that we shall investigate in the near future.
**Remark 1: My major concern is that I am not totally convinced that learning the proposed classifier is any easier than learning the variational posterior.**
This is an interesting and fair remark. We are not sure whether one tool can be considered strictly easier to learn than the other. The results in Figure 2 of the initial submission indeed show that the sample sizes are roughly the same for the `Two Moons` task. For the `SLCP` task, however, the oracle C2ST statistic never reaches close-to-zero values, implying that many more samples (N_train) are needed for the posterior estimator to converge than for $\ell$-C2ST to reach maximum TPR (N_cal). Results for additional benchmarks can be found in Figure 1 of the attached PDF: for the `Gaussian Linear Uniform` example, classification appears harder than posterior estimation; for the `Bernoulli GLM` task, however, the posterior estimator appears to converge at $N_{\mathrm{train}}=10^5$ (see the oracle C2ST curve in Column 1), while the maximum TPR is reached at $N_{\mathrm{cal}}=10^4$ (see the $\ell$-C2ST-NF curve in Column 3).
Being concerned about false conclusions due to a far-from-optimal classifier is totally justified. One should always make sure the classifier is “good enough” before using it as a diagnostic tool; if it is not, using another diagnostic tool would be better. One way to do this would be to use cross-validation to select the classifier with the highest accuracy for the observed data (the accuracy should never be exactly 0.5). Note, however, that the MSE test statistic for $\ell$-C2ST is defined by the predicted class probabilities and not by the accuracy of the classifier. Therefore one should also check how well the classifier is calibrated.
We have thought about training a second posterior estimator instead of using a classifier to assess the consistency of the first one. However, we decided that this would be redundant and preferred playing two different tools against each other. Furthermore, the choice of a binary classifier was not really motivated by the “easy to learn” aspect, but mainly by the popularity of binary classification, the fact that it is easy to understand, and its much richer and more stable literature compared to deep generative models. We think that this way our method will be more convincing for people from many different fields, rather than just the SBI / Bayesian inference community.
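To make the statistic discussed above concrete, here is a minimal self-contained sketch (ours, not the paper's implementation, with illustrative data-generating choices): a small logistic-regression classifier is trained on labeled (theta, x) pairs from two classes, and the test statistic at a fixed observation x_o is the mean squared deviation of the predicted class probability from 1/2.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Class 0 plays the role of samples (theta, x) from the "true" joint;
# class 1 uses a deliberately shifted conditional, mimicking a bad estimator q.
def draw(label, m):
    out = []
    for _ in range(m):
        x = random.gauss(0, 1)
        theta = random.gauss(x + label, 1)  # shift of 1 when label == 1
        out.append((theta, x, label))
    return out

train = draw(0, 1000) + draw(1, 1000)

# tiny full-batch logistic regression on features (theta, x, bias)
w = [0.0, 0.0, 0.0]
lr = 0.5
for _ in range(300):
    g = [0.0, 0.0, 0.0]
    for theta, x, y in train:
        err = sigmoid(w[0] * theta + w[1] * x + w[2]) - y
        g[0] += err * theta
        g[1] += err * x
        g[2] += err
    w = [wi - lr * gi / len(train) for wi, gi in zip(w, g)]

def t_stat(x_o, n_eval=2000):
    """Mean squared deviation of the predicted class probability from 1/2,
    evaluated at the fixed observation x_o on samples from the q-class."""
    total = 0.0
    for _ in range(n_eval):
        theta = random.gauss(x_o + 1, 1)
        d = sigmoid(w[0] * theta + w[1] * x_o + w[2])
        total += (d - 0.5) ** 2
    return total / n_eval

print(t_stat(0.0))  # clearly positive: the shifted q is detected at x_o = 0
```

A well-calibrated classifier that cannot distinguish the classes would output probabilities near 1/2, driving the statistic toward zero; this is why the rebuttal stresses checking calibration, not just accuracy.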
**Remark 2: I felt that the benchmarking could have been improved slightly. In particular, some empirical relationship between the proposed measure of fit and existing measures would be nice (...).**
We duly note this remark and thank the reviewer for pointing it out. Indeed, it would be nice to have some easier to understand and interpretable experiments, where less factors have an influence on the results. Unfortunately we did not have the time to design a new experiment.
**Remark 3: A minor technical point throughout the proofs of the theorems is that they rely on the null hypothesis holding on a set of strictly positive measure (...)**
We will correct the demonstrations in our camera ready version. We thank the reviewer for pointing this out!
**Question 1: Using the same $X_n$'s in Algorithm 1 for draws from $p(.|X_n)$ and $q(.|X_n)$ introduces dependencies in the training data for the classifier. Can that somehow be leveraged when training the classifier (...). Along similar lines, if it is much cheaper to sample from $q(\theta|x)$ than it is to sample from $p(x, \theta)$, then could it help to somehow incorporate more samples from $q(\theta|X_n)$ in the algorithm (or perhaps the whole distribution somehow)? (....)**
Using the same $X_n$’s for draws from $p$ and $q$ is a sensible and interesting topic. In Appendix A.7 in the initial submission we address this and show that in this case the theoretical cross-entropy loss is the same as if the $X_n$’s were drawn independently.
However, we have thought about adding more samples from $q$ to improve our method as a follow-up to this work. When considering Normalizing Flows as posterior estimators, we would even have access to the likelihood. That could definitely help. Our concern is that the classification problem becomes unbalanced and we might overfit on data from $q$ w.r.t. data from $p$. Interestingly, the recent work Discriminative Calibration by Yao and Domke (2023) proposes an extension of C2ST for SBI that tries to leverage additional information from $q$, but without being *local* (the main contribution of $\ell$-C2ST).
**Question 2: Algorithm 1 also seems computationally intensive due to the permutation step. Would it be possible to do something like split-sample conformal inference to speed up the p-value calculations?**
This is a very interesting question. We have taken a look at conformal inference to include a calibration step in the predicted probabilities of the classifiers. Indeed, using a technique with finite-sample guarantees based on order statistics could probably allow us to avoid the rather time-consuming permutations under the null hypothesis. However, it should be noted that conformal prediction is known to have difficulties in providing intervals with fixed conditional coverage, so for our local procedure this could be a problem. We will take a further look at this in the near future.
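For context, the permutation-based p-value being discussed follows the generic two-sample pattern below (a standalone sketch, ours; in the actual Algorithm 1 the statistic involves retraining the classifier under permuted labels, which is the costly part a conformal shortcut might avoid).

```python
import random

random.seed(1)

def mean_diff(a, b):
    return abs(sum(a) / len(a) - sum(b) / len(b))

def permutation_p_value(a, b, stat=mean_diff, n_perm=500):
    """p-value = fraction of label permutations whose statistic is at least
    as extreme as the observed one (with the +1 finite-sample correction)."""
    observed = stat(a, b)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if stat(pooled[:len(a)], pooled[len(a):]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

null_a = [random.gauss(0, 1) for _ in range(200)]
null_b = [random.gauss(0, 1) for _ in range(200)]
shifted = [random.gauss(1, 1) for _ in range(200)]

print(permutation_p_value(null_a, null_b))  # typically large: H0 not rejected
print(permutation_p_value(null_a, shifted)) # essentially 1/(n_perm+1): reject
```

The cost driver is that each of the n_perm iterations re-evaluates the statistic; when the statistic requires retraining a classifier, the whole procedure multiplies the training cost by n_perm.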
**Limitations:** The authors may wish to include a limitations section that highlights failure modes of their method.
We will try to include one in the final camera ready version of this article.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed response. I found the paper interesting and enjoyable, but I stand by my initial score. | Summary: The authors present a new diagnostic tool for simulation-based inference. The method learns the C2ST without knowledge of the true posterior distribution. If the density estimator is a normalizing flow, the authors propose to perform the classification in latent space.
Strengths: **Originality**: The method tackles and important issue for simulation-based inference. The method is new and potentially very useful. The extension for normalizing flows is interesting and creative. If L-C2ST works well, I could see this be adopted by the community.
**Quality**: The method is rigorously derived.
**Clarity**: Everything is explained well and with sufficient detail. Figures are clear.
Weaknesses: **Quality**:
My main concern with this paper is that it does not sufficiently demonstrate how well the method actually works. In particular, I believe that the following crucial questions remain unanswered:
- Does the method indeed capture the true C2ST for different `x`?
For example, the (decent) TPRs shown in figure 2 could also be obtained if L-C2ST converges onto the average C2ST across x_o. The central claim of the authors is that L-C2ST is local, but this claim needs substantially more empirical evidence.
To address this concern, I would (for example) suggest adding a scatter plot of L-C2ST vs. the true C2ST on a task where the ground truth is available by MCMC or analytically
- How well does the method scale with parameter and data dimensionality?
Since C2ST is trained on theta and x as input, I could imagine it to not scale well to high-D x or theta (or require **even more** simulations). All tasks in the paper are low-D though. I suggest that the authors add an additional benchmark task with higher data and parameter dimensionality.
- Is the method actually better than local-HPD?
On one task, L-C2ST performs better than local-HPD and worse on another one. Which method should be preferred? The paper also claims that local-HPD is much less efficient than C2ST but this is never empirically shown. In particular, while a naive implementation of local-HPD might be slow, amortizing over the confidence alpha should be trivial and make local-HPD as fast as C2ST. Please correct me if I am wrong or, otherwise, clarify this in the paper.
To address this, I recommend adding an analysis of the runtimes of the algorithms and ideally have more than only two benchmark tasks.
I believe that the above concerns are critical to the usefulness of the method, but I would be willing to significantly increase my score if these things are addressed.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Do I understand correctly that, for large N_H, one has to train the classifier multiple times? If yes, couldn’t this make the method less efficient than local-HPD?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors state limitations of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the potential and usefulness of the method that we propose, as well as appreciating its extension to the case of normalizing flows. We also thank the reviewer for the very deep and interesting questions related to our method. We have followed the reviewer's suggestions and have added results on more SBI benchmark examples, as well as new experiments for analyzing our results (cf. Figures 1 and 2 in the attached PDF). These examples are described and the results summarized in the global response to all reviewers.
**Question 1: Does the method indeed capture the true C2ST for different x?**
Figure 2 in the submitted paper compares our method to the oracle C2ST, but only in terms of statistical power, as we limited the *local* analysis to the averaged results over 10 different reference observations. Following the reviewer's suggestion, we've added scatter plots to examine how the values of the $\ell$-C2ST(-NF) test statistic correlate with those from the oracle C2ST. The results are presented in Figure 2.a of the attached PDF for the same 10 reference observations $x_o$ initially used. To allow for more robust conclusions, we've also included scatter plots using 100 reference observations (cf. Figure 2.b). Please note that we have also considered additional benchmark examples.
Overall, the scattered points are not too far from the diagonal, which indicates that the test statistics for $\ell$-C2ST(-NF) correlate quite well with those from the oracle C2ST. Also, as $N_{train}$ increases, the scattered points become closer to zero and more concentrated. This result comes as no surprise: when the posterior approximation is consistent, the test statistics should be close to zero. If the approximation is poorly estimated, the statistics should deviate from zero and the points can take on different values. We observe that the -NF version seems to be slightly closer to the oracle C2ST than the standard $\ell$-C2ST.
In examples `Two Moons`, `Gaussian Linear Uniform`, and `Bernoulli GLM`, the scatter plots display points that are close to the diagonal. In tasks `SLCP` and `Bernoulli GLM Raw`, most scattered points are below the diagonal, especially for higher $N_{train}$ values. This means that $\ell$-C2ST has lower test statistic values, so the null hypothesis will likely not be rejected: it is, therefore, less sensitive to local differences between $p$ and $q$ than the oracle C2ST, which is consistent with Figure 1 of the attached PDF. Intuitively, this can be explained by the fact that $\ell$-C2ST is trained on the joint pdf and is thus less precise.
The results for the `Gaussian Mixture` task, however, deviate from the general trend, as in Figure 1 of the attached PDF. This may be due to large variability in the *local* consistency of $q$. Unlike the true C2ST, $\ell$-C2ST is trained on the joint data space and could therefore overfit on the "bad" observations, resulting in higher test statistics for observations where the true C2ST statistic would be small.
**Question 2: How well does the method scale with parameter and data dimensionality**
Please see our response to Question 1 from Reviewer `Esye`.
In our initial submission, the performance of C2ST on the SBI tasks that we considered was mainly impacted by the structure of the parameter space and the corresponding shape of the posterior distribution, since the examples were of rather low dimensionality. We have added more SBI tasks with varying sizes of observation/parameter space to give a clearer picture of the performance of our method.
It should be noted that local-HPD performs significantly worse in medium dimensions (cf. `Bernoulli GLM` or even `SLCP`) than in low dimensions (`Gaussian Mixture` and `Two Moons`), though this could be because of the complex posterior structure. Also, $\ell$-C2ST-NF scales well to the high-dimensional observation space of `Bernoulli GLM`, while *local*-HPD significantly loses statistical power.
**Question 3: Is the method actually better than local-HPD?**
First of all, it is important to mention that having uniform HPD-values is not a sufficient condition for asserting the null hypothesis of consistency. This is mentioned by Zhao et al. (2021) (see end of section 3.3) and is a clear disadvantage compared to our proposal, which provides a necessary and sufficient proxy for inspecting local posterior consistency.
Furthermore, the HPD methodology summarizes all of the information concerning $\theta$ into a single scalar, while in $\ell$-C2ST we handle the $\theta$-vector in its multivariate form. In high $\theta$-dimensions (`Bernoulli GLM`) or for complex posterior distributions (`SLCP`), such a summary might discard too much information and not be enough to satisfactorily assess the consistency of the posterior estimator. Indeed, the only task where local-HPD outperforms $\ell$-C2ST is `Gaussian Mixture` (low dimension and Gaussian posterior).
Finally, as mentioned by the reviewer, local-HPD in its naive implementation is much less efficient than $\ell$-C2ST (see Appendix A.5 of the submitted paper). A new version of *local*-HPD with amortized $\alpha$ has recently been proposed by Dey et al. (2023). However, we were not able to verify how well it works in time for the submission deadline.
**Question 4: For large N_H, one has to train the classifier multiple times? (...) less efficient than local-HPD?**
$N_H$ is the number of times we compute the test statistic under the null hypothesis in order to obtain p-values. The number of classifiers we need to train is proportional to the number of test statistics we compute ($N_H$ for $\ell$-C2ST vs. $N_H \times n_{\alpha}$ for *local*-HPD). In summary, if $\ell$-C2ST is more efficient at computing a single test statistic, it is also more efficient at computing $N_H$ test statistics.
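For concreteness, the p-value computation described here amounts to comparing the observed statistic against the $N_H$ statistics computed under the null. The sketch below uses synthetic null statistics and a standard +1 finite-sample correction (an assumption for illustration, not necessarily what our code does):

```python
import numpy as np

def mc_p_value(t_obs, t_null):
    """Monte Carlo p-value: fraction of null test statistics at least as
    extreme as the observed one. The +1 terms are a common finite-sample
    correction (assumed here for illustration)."""
    t_null = np.asarray(t_null)
    return (1 + np.sum(t_null >= t_obs)) / (1 + len(t_null))

rng = np.random.default_rng(0)
t_null = rng.uniform(0.0, 0.01, size=100)  # N_H = 100 synthetic null statistics
p = mc_p_value(0.05, t_null)               # observed statistic far in the tail
```

Each entry of `t_null` would require training one classifier for $\ell$-C2ST, versus $n_{\alpha}$ classifiers for *local*-HPD, which is where the efficiency gap discussed above comes from.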
---
Rebuttal Comment 1.1:
Title: Concerns on accuracy of l-C2ST
Comment: Thanks a lot for the detailed response and for the additional experiments, they helped in clarifying Q2, Q3 (although I still think that the gains over HPD are rather small), and Q4.
Q1: However, I am worried about the results shown in the new pdf. In Figure 2, for any given training budget N_train, the oracle C2ST seems to correlate very weakly (if at all) with l-C2ST. How, then, should l-C2ST indeed be a **local** measure for calibration?
---
Reply to Comment 1.1.1:
Comment: Thank you for your response.
**General remark on the performance of $\ell$-C2ST w.r.t. *local*-HPD:**
Our goal with $\ell$-C2ST is to present a new, **alternative** method to *local*-HPD, which to our knowledge is the only other existing local diagnostic (please see **Remark 1** in the response to Reviewer `Ppab`, for further motivations regarding our choice to base our work on **C2ST**). Our research and experiments show that $\ell$-C2ST is theoretically valid and works on several datasets, sometimes even outperforming *local*-HPD. It is true that our method does not work as well on all examples, but a big advantage of $\ell$-C2ST is that one can directly use literature and advancements from the *classification* field in order to adapt and enhance it for any given dataset/task (e.g. see response to **Question 1** for Reviewer `Ppab`). This makes $\ell$-C2ST a competitive alternative with great potential.
**Regarding the accuracy of $\ell$-C2ST and it actually being a *local* method:**
- The method is based on solid mathematical reasoning and it is **local by definition**. Moreover, it has **if and only if** guarantees for consistency, a remarkable property for this kind of statistical test, which *local*-HPD does not have.
- The suggested experiment with the scatterplots showing the "accuracy" of $\ell$-C2ST is really interesting! Our results in the attached PDF show that there is some correlation between the values of the test statistics for $\ell$-C2ST and oracle C2ST. This correlation becomes weaker as $N_{\mathrm{train}}$ grows, since the test statistics then tend to zero and can start to be confounded with noise. We have carried out standard tests for the statistical significance of the correlation between the scores. The results are shown at the end of this comment.
- The fact that $\ell$-C2ST does not track the exact values of the oracle C2ST on all examples and for every observation does not necessarily mean that it is not a good method for **detecting local posterior inconsistencies**. Our results in Figure 3 and Figure 4 (those on the `JRNMM` data) clearly show how the statistical test varies with each choice of observation $x_{\mathrm{o}}$. Also, while we want to be as close as possible to the oracle C2ST, the more appropriate performance metric for measuring the capacity to **detect local inconsistencies** is the statistical error of the test (i.e. power and type 1 error): Figure 2 in the main paper and Figure 1 in the attached PDF show that in more than one example, $\ell$-C2ST reaches maximum power and outperforms *local*-HPD.
**P-values of the Pearson test of non-correlation between the oracle C2ST and the $\ell$-C2ST( / -NF) MSE test statistic**.
Obtained for 100 observations (as plotted in Figure 2.b.) using `scipy.stats.pearsonr`. Bold values indicate the cases for which the Pearson test rejects the null hypothesis of non-correlation with 95% confidence:
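Each entry below comes from a call of the following shape; the statistics here are synthetic stand-ins for the per-observation $\ell$-C2ST and oracle C2ST values (a sketch, not our experimental code):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic per-observation statistics for 100 observations: the l-C2ST
# values are built as a noisy linear function of the oracle values.
t_oracle = rng.uniform(0.5, 0.8, size=100)
t_l_c2st = 0.1 * (t_oracle - 0.5) + rng.normal(0.0, 0.01, size=100)

r, p_value = pearsonr(t_oracle, t_l_c2st)
reject_non_correlation = p_value < 0.05  # reject at the 95% confidence level
```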
`Two Moons`:
- $N_{\mathrm{train}}=100$: **10e-27** / **10e-9**
- $N_{\mathrm{train}}=1000$: **0.0006** / 0.19
- $N_{\mathrm{train}}=10000$: **10e-16** / **10e-11**
- $N_{\mathrm{train}}=100000$: 0.052 / **10e-5**
`SLCP`:
- $N_{\mathrm{train}}=100$: **0.0001** / 0.12
- $N_{\mathrm{train}}=1000$: **0.0009** / **0.03**
- $N_{\mathrm{train}}=10000$: 0.31 / 0.82
- $N_{\mathrm{train}}=100000$: 0.21 / 0.40
`Gaussian Mixture`:
- $N_{\mathrm{train}}=100$: **10e-8** / **10e-12**
- $N_{\mathrm{train}}=1000$: **10e-7** / **0.01**
- $N_{\mathrm{train}}=10000$: **0.006** / 0.35
- $N_{\mathrm{train}}=100000$: **10e-14** / **0.006**
`Gaussian Linear Uniform`:
- $N_{\mathrm{train}}=100$: **10e-13** / **10e-12**
- $N_{\mathrm{train}}=1000$: 0.07 / **10e-9**
- $N_{\mathrm{train}}=10000$: 0.42 / **0.002**
- $N_{\mathrm{train}}=100000$: 0.68 / 0.87
`Bernoulli GLM`:
- $N_{\mathrm{train}}=100$: **10e-8** / **10e-5**
- $N_{\mathrm{train}}=1000$: **10e-10** / **0.0002**
- $N_{\mathrm{train}}=10000$: 0.67 / 0.18
- $N_{\mathrm{train}}=100000$: 0.39 / 0.31
`Bernoulli GLM Raw`:
- $N_{\mathrm{train}}=100$: **0.03** / 0.37
- $N_{\mathrm{train}}=1000$: **10e-8** / **0.0004**
- $N_{\mathrm{train}}=10000$: **0.0001** / **0.04**
- $N_{\mathrm{train}}=100000$: 0.92 / 0.06 | Summary: An algorithm for evaluating amortized posterior estimators $q(\theta|x)$ in simulation-based inference (SBI) is proposed. In the setting considered, we have the ability to sample $p(\theta)$ and to sample $p(x|\theta)$ but not to evaluate its density. The algorithm modifies the common classifier two-sample test by learning to discriminate between samples $(\theta,x)$ sampled ancestrally (through simulation) from $p(\theta,x)$ and those where $\theta$ is then resampled from $q(\theta|x)$. An extension to normalizing flows is also considered, where the discriminator works in the latent space rather than in the theta space. There are strong results on several SBI benchmarks and real-world problems in terms of statistical power and runtime.
Strengths: Please note that this is a relative outsider's point of view, as I do not directly work on SBI.
I find the paper well-written and was able to understand the main points and the math. In particular, the first few pages are a good introduction to the problem area that will be accessible to any reader who is familiar with Bayesian inference and hypothesis testing.
The results and their significance are explained well and give intuitions about why the proposed algorithm works well on the chosen problems.
The provided code is a helpful addition.
Weaknesses: I do not see any major weaknesses, but have a few questions.
- Could you please comment on the scalability of the algorithm? What difficulties do you foresee when scaling to high-dimensional theta or to problems where the prior is wide? (Same question about high-dimensional observations. Is there more risk of overfitting to the class split of $x$?)
- It would be interesting to understand better the benefit of the -NF version of the algorithm beyond the reasons described in L180, which seem to explain only computation cost improvements. Is it expected that the classification boundary is smoother in the latent space than in the theta space? Maybe this could be illustrated, e.g., on the two moons example.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please see above.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our paper "clear and well-written", as well as for having checked our code. We now answer the two main questions raised by the reviewer. Please note that we have added results on more SBI benchmark examples, which you will find in Figures 1 and 2 of the PDF attached to the rebuttal. These examples are described and the results summarized in the global response to all reviewers.
**Question 1: Scalability of the algorithm to high-dimensional theta or x**
An obvious answer would be to say that in higher dimensions more samples are needed: larger $N_{\mathrm{cal}}$ for $\ell$-C2ST(-NF) to converge to the oracle C2ST. In our setting, this would mean better convergence to the *optimal Bayes classifier*, which is known to be the same for $\ell$-C2ST and oracle C2ST. Note, however, that the MSE test statistic for $\ell$-C2ST is defined by the predicted class probabilities and not the accuracy of the classifier. Therefore, adding a regularizer or maybe a calibration step (e.g. using methods from the conformal inference literature) might be necessary when working with data defined in high dimensions, so as to prevent overfitting and overconfident predictions.
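To make the remark about the test statistic concrete, here is a minimal sketch of an MSE-style statistic based on predicted class probabilities rather than classifier accuracy (the function name and interface are illustrative, not our actual code):

```python
import numpy as np

def mse_test_statistic(predicted_probs):
    """Mean squared deviation of the classifier's predicted class-1
    probabilities from the chance level 1/2. Under the null (consistent
    posterior), the optimal Bayes classifier predicts 1/2 everywhere,
    so the statistic is close to 0; confident predictions inflate it."""
    p = np.asarray(predicted_probs, dtype=float)
    return float(np.mean((p - 0.5) ** 2))

# Chance-level predictions give 0; maximally confident ones give 0.25.
t_null_like = mse_test_statistic([0.5, 0.5, 0.5, 0.5])
t_confident = mse_test_statistic([1.0, 0.0, 1.0, 0.0])
```

This also illustrates why regularization or calibration matters in high dimensions: an overfit classifier can report probabilities far from 1/2 even under the null, inflating the statistic.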
In the specific case of SBI, the dimension of the parameter space ($m$) is typically of order $10^0$ to $10^1$ and $m \approx 10^2$ is already often considered as high dimensional. The observation space, however, can be high-dimensional (e.g. time-series), but summary statistics are often used to reduce the dimension of the observations to the order of $d \approx 10^1$.
In our initial submission, we show results for rather low-dimensional tasks ($m=2, d=2$ for `Two Moons` and $m=5, d=8$ for `SLCP`). They illustrate how our method behaves in terms of the difficulty of the inference task (e.g. complex posteriors as in `SLCP`). To demonstrate how the results change with respect to the dimensionality of the data, we've added more tasks of low and medium dimensionality: `Gaussian Mixture` ($m=2, d=2$), `Gaussian Linear Uniform` ($m=10, d=10$) and `Bernoulli GLM` ($m=10, d=10$). To analyze how our method scales to **high-dimensional observation spaces** only (without parameter-space / task variability), we've also added the `Bernoulli GLM Raw` task. It considers *raw* observation data ($d=100$), as opposed to the *sufficient summary statistics* used in `Bernoulli GLM` ($d=10$). We refer the reader to the global response to all reviewers for a description of all benchmark examples (in terms of dimensionality, posterior structure, and challenges) and a summary of all experimental results.
In the attached PDF, Figure 1 shows results obtained for the new benchmarks, thereby extending Figure 2 of the initial submission. We can see in Column 3 of both figures that for $\ell$-C2ST to converge to the oracle C2ST (at maximum power $\mathrm{TPR} = 1$), fewer samples are needed in the low-dimensional `Two Moons` and `SLCP` tasks than in the medium-dimensional `Bernoulli GLM` task: $N_{\mathrm{cal}} \approx 2000$ vs. $N_{\mathrm{cal}} \approx 5000$. This confirms our intuition stated above.
Note that we did not include the `Gaussian Mixture` and `Gaussian Linear Uniform` tasks in this analysis, as they are not comparable to the other tasks: the TPR of $\ell$-C2ST at $N_{\mathrm{train}}=1000$ is smaller than 1 (see Column 2). The classification task is thus harder and more samples are required to converge to reach maximum TPR: the oracle C2ST now requires $N_{\mathrm{cal}}=2000$ and *local* methods never reach maximum TPR (see Column 3). Here, the difficulty of the classification task has more impact on statistical power than the dimensionality of the task (e.g. the convergence to maximum TPR is slower in `Gaussian Mixture` than in `Bernoulli GLM` of higher dimension).
Interestingly, we observe in the `Bernoulli GLM Raw` task that $\ell$-C2ST-NF scales well to the high-dimensional observation space (faster convergence to maximum TPR compared to the `Bernoulli GLM` task), while the normal $\ell$-C2ST and *local*-HPD significantly lose statistical power.
**Question 2: Benefit of the -NF version**
The numerical illustrations in our manuscript indicate that the -NF version of our statistical test works better when the (true) posterior distribution of the model is "more complicated" than a Gaussian distribution. This is the case for the `Two Moons` and `SLCP` tasks: the posterior distributions are globally multi-modal and locally structured (cf. task descriptions in the global response to all reviewers). We observe in Column 3 of Figure 2 in the main paper that the -NF version requires **fewer samples** (i.e. lower $N_{\mathrm{cal}}$) to reach maximum power/TPR. This is also the case for the additional `Bernoulli GLM` task (see Column 3 of Figure 1 in the attached PDF). In contrast, for the additional `Gaussian Mixture` and `Gaussian Linear Uniform` tasks, where the posterior is a Gaussian distribution, the normal $\ell$-C2ST is as powerful as or even better than its -NF counterpart (see Column 3 of Figure 1 in the attached PDF).
We have also observed that $\ell$-C2ST-NF yields test statistics that correlate more closely with the oracle C2ST in situations where the true posterior distribution can be sampled with MCMC. The experiments and illustrations related to this finding can be found in Figure 2 of the attached PDF and are described in our response to Reviewer `ukES02`.
Finally, we refer the reader to the previous question to point out an interesting observation: for the `Bernoulli GLM`, the -NF version scales much better to high dimensional observation spaces than the normal $\ell$-C2ST. Note that this does not allow us to make any general conclusions, but it might be worth further investigating this result. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their thorough reading of our manuscript and their very interesting questions. We have received very positive remarks concerning the clarity of our text and the potential of our method for the SBI community, for which we are very grateful. Some of the questions raised concerned how our method behaves as the dimensionality of the data increases, whether the test statistics of $\ell$-C2ST really capture the true test statistics (i.e. those of the oracle C2ST) for different values of the conditioning observation $x$, and whether our method is indeed superior to local HPD. We address all these questions (and some more) below.
We have followed the suggestions and added results on further examples from the SBI benchmark. In the attached PDF, Figure 1 extends Figure 2 of the submitted version, with the intent of better validating our method across varying observation and parameter dimensions. We have also included results for a new experiment in Figure 2 of the PDF, showing the correlation between the test statistics of $\ell$-C2ST(-NF) and those of the oracle C2ST for different conditioning observations.
**Description of the used SBI benchmark examples from `sbibm` and summary of main results:**
`Two Moons`:
- **Dimensions ($\theta, x$):** (2,2)
- **Posterior structure:** bi-modal, crescent shape
- **Challenge:** globally and locally structured
- **$\ell$-C2ST(-NF) vs. local-HPD** (power, Figure 1): faster convergence to max power, but worse for high $N_{train}$
- **$\ell$-C2ST-NF vs. L-C2ST** (power, Figure 1): better (less samples for higher power)
- **$\ell$-C2ST(-NF) vs. C2ST** (relative position to diagonal, Figure 2): good
`SLCP`:
- **Dimensions ($\theta, x$):** (5,8)
- **Posterior structure:** 4 symmetrical modes
- **Challenge:** posterior designed to be complex
- **$\ell$-C2ST(-NF) vs. local-HPD** (power, Figure 1): better
- **$\ell$-C2ST-NF vs. L-C2ST** (power, Figure 1): better
- **$\ell$-C2ST(-NF) vs. C2ST** (relative position to diagonal, Figure 2): lower
`Gaussian Mixture`:
- **Dimensions ($\theta, x$):** (2,2)
- **Posterior structure:** 2D gaussian
- **Challenge:** one of the gaussians in the mixture has much broader covariance than the other.
- **$\ell$-C2ST(-NF) vs. local-HPD** (power, Figure 1): slightly worse
- **$\ell$-C2ST-NF vs. L-C2ST** (power, Figure 1): same
- **$\ell$-C2ST(-NF) vs. C2ST** (relative position to diagonal, Figure 2): lower, then higher
`Gaussian Linear Uniform`:
- **Dimensions ($\theta, x$):** (10,10)
- **Posterior structure:** multivariate Gaussian
- **Challenge:** scaling to higher dimensions
- **$\ell$-C2ST(-NF) vs. local-HPD** (power, Figure 1): similar
- **$\ell$-C2ST-NF vs. L-C2ST** (power, Figure 1): worse
- **$\ell$-C2ST(-NF) vs. C2ST** (relative position to diagonal, Figure 2): good
`Bernoulli GLM`:
- **Dimensions ($\theta, x$):** (10,10)
- **Posterior structure:** unimodal, concave
- **Challenge:** scaling to higher dimensions
- **$\ell$-C2ST(-NF) vs. local-HPD** (power, Figure 1): better
- **$\ell$-C2ST-NF vs. L-C2ST** (power, Figure 1): better
- **$\ell$-C2ST(-NF) vs. C2ST** (relative position to diagonal, Figure 2): good / slightly lower
`Bernoulli GLM Raw`:
- **Dimensions ($\theta, x$):** (10,100)
- **Posterior structure:** unimodal, concave
- **Challenge:** Scaling to high dimensional observation spaces
- **$\ell$-C2ST(-NF) vs. local-HPD** (power, Figure 1): better
- **$\ell$-C2ST-NF vs. L-C2ST** (power, Figure 1): better
- **$\ell$-C2ST(-NF) vs. C2ST** (relative position to diagonal, Figure 2): slightly lower
Pdf: /pdf/27033635d5ce9db2162929d768e60aad2d1d9708.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Lookaround Optimizer: $k$ steps around, 1 step average | Accept (poster) | Summary: The paper presents Lookaround, a novel optimizer for weight average ensembling (WA). Unlike existing approaches that perform weight averaging post-training, Lookaround adopts a two-step process throughout the training period. In each iteration, the "around" step trains multiple networks simultaneously on transformed data using different augmentations, while the "average" step combines these networks to obtain an averaged network as the starting point for the next iteration. The approach demonstrates clear superiority over state-of-the-art methods in extensive experiments on CIFAR and ImageNet datasets using both traditional CNNs and Vision Transformers. The paper provides theoretical justification and commits to open science by making the code publicly available.
Strengths: 1. `Innovative Approach`: The introduction of Lookaround as a straightforward and effective optimizer for weight average ensembling brings a novel perspective to the field. The two-step process during training enhances network diversity and preserves weight locality, addressing the limitations of post-hoc weight averaging approaches.
2. `Theoretical Justification`: The paper offers strong theoretical support for the superiority of Lookaround through convergence analysis. Proposition 1 provides insights into the convergence variance of Lookaround in comparison to typical SGD and Lookahead optimizers. This rigorous theoretical analysis enhances the credibility and significance of the proposed method, demonstrating a solid foundation for its effectiveness and performance.
3. `Extensive Experimental Validation`: The authors conduct extensive experiments on popular benchmarks, including CIFAR and ImageNet, with both traditional CNNs and Vision Transformers (ViTs). The clear superiority of Lookaround over state-of-the-art methods on these datasets demonstrates its effectiveness and applicability.
4. `Commitment to Open Science`: The authors state their intention to make the code publicly available, fostering reproducibility and enabling further research in the field.
Weaknesses: 1. `Computational Complexity and Comparison`: The paper lacks a comprehensive discussion of the computational complexity introduced by the Lookaround optimizer. As Lookaround incorporates an additional around step during training, it is crucial to assess its computational requirements and compare them to existing optimization methods. A detailed analysis of the computational trade-offs, including runtime and memory usage, would provide a more comprehensive understanding of Lookaround's practical applicability and scalability.
2. `Limited Experiments on Datasets Beyond Image Classification`: The paper primarily focuses on evaluating Lookaround on image classification tasks using datasets like CIFAR and ImageNet. However, to establish the versatility and effectiveness of Lookaround, it is important to explore its performance on datasets beyond image classification. Conducting experiments on diverse tasks such as language modeling or segmentation/detection tasks would demonstrate the generalizability of Lookaround across various domains and provide a more comprehensive evaluation of its performance.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. With the Lookaround optimizer requiring k around steps and 1 average step, I am unclear about how the epoch and iteration are defined in this setting. Are they counted based on the average step or the around step? Could you provide clarification on how the epoch and iteration are structured in Lookaround?
2. Considering that each batch is augmented k times in Lookaround, I am curious to understand how this approach differs from simply applying `repeated augmentations`[A] to each batch. What distinguishes the effect of k augmentations and 1 step update in Lookaround versus performing each step update individually and then averaging all checkpoints? In other words, what advantages does Lookaround's approach provide in terms of diversity and ensemble performance compared to the alternative method of averaging checkpoints from individual updates?
[A] Hoffer E, Ben-Nun T, Hubara I, et al. Augment your batch: Improving generalization through instance repetition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 8129-8138.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: As discussed in the limitation section, "additional cost of network searching using different data augmentation, resulting in a longer training time proportional to the number of trained networks", the running time is an important concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your encouraging words and constructive comments. We sincerely appreciate your time in reading the paper, and our point-to-point responses to your comments are given below.
**Q1: How are the epoch and iteration defined in Lookaround's setting?**
Under the standard training process, an epoch means the model fully traverses the original dataset once, typically consisting of $N$ batches. In the Lookaround setting, however, an epoch means the model fully traverses the dataset $d$ times, i.e., $d \times N$ batches, because each batch of data undergoes $d$ different data augmentations. As the title says, Lookaround takes $k$ steps around and 1 step average: one iteration means that each branch of Lookaround performs one around step. We will revise the pseudo-code section of the paper to make this easier to understand.
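To illustrate the around/average structure on toy weight vectors, here is a minimal sketch; the gradient callables stand in for training each branch under its own data augmentation (a sketch under simplifying assumptions, not the actual optimizer implementation):

```python
import numpy as np

def lookaround_epoch(w0, branch_grads, k=5, lr=0.1, n_iters=100):
    """One Lookaround-style pass (sketch): each of the d branches takes
    an around step with its own gradient (standing in for its own data
    augmentation); every k iterations all branches are reset to their
    weight average."""
    d = len(branch_grads)
    branches = [w0.copy() for _ in range(d)]
    for it in range(n_iters):
        for i in range(d):                  # around step, one per branch
            branches[i] = branches[i] - lr * branch_grads[i](branches[i])
        if (it + 1) % k == 0:               # average step
            avg = np.mean(branches, axis=0)
            branches = [avg.copy() for _ in range(d)]
    return np.mean(branches, axis=0)

# Toy quadratic losses whose optima differ per "augmentation":
# branch i has gradient w - t_i, pulling it toward its own target t_i.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
grads = [lambda w, t=t: w - t for t in targets]
w_final = lookaround_epoch(np.zeros(2), grads, k=5, lr=0.1, n_iters=100)
# The iterates settle near the average of the three optima, [0.5, 0.5].
```

The repeated averaging keeps the branches close to each other (locality) while each branch is still pulled toward its own augmentation-specific optimum (diversity).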
**Q2: Lookaround versus repeated augmentations.**
Thanks for mentioning this wonderful work. While our method shares some similarities with it, there are also significant differences. First, the motivations behind the two approaches are distinct. The central idea of that paper is to use multiple data augmentations within a single batch to average gradients, in the hope of obtaining a more stable gradient estimate. In contrast, the goal of Lookaround is to increase the diversity of the models in the loss landscape. It performs gradient descent separately under different data augmentations for $k$ steps, followed by one averaging step. From this perspective, "Augment your batch" can be seen as a special case of the Lookaround method where $k$ is set to 1.
**Q3: Computational Complexity and Comparison.**
The Lookaround optimizer can be viewed as maintaining multiple backups of the model weights during the optimization process. Each backup is trained only under its corresponding data augmentation strategy (forward and backward propagation), and the backups are merged by weight averaging after multiple iterations.
In general, if the time for an iteration is defined as $\Omega$, and the time to perform a weight average is $\omega$, given that the dataset contains $B$ batches, the standard time complexity of SGDM to complete an epoch is $O(B\Omega)$. The time complexity of Lookaround is $O(dB\Omega+\frac{B\omega}{d})$. Given that $\omega$ is much smaller than $\Omega$, this can be approximated as $O(dB \Omega)$.
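Plugging numbers into the complexity expression above shows how the averaging term becomes negligible (all values here are hypothetical, chosen only for illustration):

```python
def lookaround_epoch_cost(B, Omega, omega, d):
    # d*B*Omega for the around steps plus B*omega/d for the weight
    # averages, following the expression in the text.
    return d * B * Omega + B * omega / d

def sgdm_epoch_cost(B, Omega):
    return B * Omega

# Hypothetical values: B = 390 batches, 0.3 s per iteration,
# 0.1 ms per weight average, d = 3 augmentations.
ratio = lookaround_epoch_cost(390, 0.3, 1e-4, 3) / sgdm_epoch_cost(390, 0.3)
# With omega << Omega, the ratio approaches d, i.e. O(d*B*Omega) dominates.
```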
In our experiments (corresponding to Tables 1 to 4 in the paper), to establish fair comparisons in computation among the different approaches, we adopt the same augmentations for the competitors. Specifically, within a single epoch, both the proposed Lookaround and the competitors are trained on the same $d$ data augmentations. With such a setup, we guarantee consistency in the data volume used by each method, thereby ensuring fair comparisons in terms of computation.
The following is an example of the running time of different methods in our experiments. We run each method for 50 epochs and calculate the average execution time of ResNet50 for one epoch on a 3090 graphics card on CIFAR100. The results are shown in Table S1.
Table S1: Average execution time of each method traversing an epoch on ResNet50 on CIFAR100.
| Method | SGDM | Lookahead | AdamW | SAM | Lookaround |
| :------------: | :--: | :-------: | :---: | :--: | :--------: |
| Execution Time ($s$) | 123 | 123 | 125 | 246 | 123 |
When considering memory usage, we can divide it into two parts: one part is due to the model parameters, and the other is due to the intermediate results produced during training, with the latter occupying the main share of memory. In terms of parameter memory, Lookaround occupies $d$ times as much as SGDM. As for the intermediate results produced by the computational graph during training, since each around step updates only the corresponding weight backup, this memory does not grow with the number of data augmentation strategies, and its usage is the same as for conventional methods (such as SGDM).
**Q4: Limited Experiments on Datasets Beyond Image Classification.**
We really appreciate your suggestions. It is a great idea to apply Lookaround to broader areas, including language modeling and segmentation/detection tasks. In this work, we focus on visual classification; the broader applications are left to future work. Thank you again for your kind suggestions and positive comments on our work.
---
Rebuttal Comment 1.1:
Title: Response to the author
Comment: I greatly appreciate your feedback and the clarification you've provided. I am content with the overall quality of the paper and would like to vote for a "Strong Accept" (8).
---
Reply to Comment 1.1.1:
Comment: We express our gratitude to the reviewer for your positive feedback. We appreciate your valuable comments and we will carefully revise the paper based on your comments. | Summary: This work provides a new optimizer, "Lookaround optimizer," which is built upon a previous proposed "Lookahead optimizer" [40]. By incorporating data augmentation, this work shows an improved convergence rate under low condition numbers. It also empirically shows some improvement in classification accuracy under several datasets.
Strengths: This work proposes a new optimizer and shows consistent (albeit small) improvement in various settings. The supplementary material shows a solid derivation.
Weaknesses: 1. The major concern lies in the limited improvement over the "Lookahead optimizer." The improvements in Figure 2 and Tables 1-3 all seem to be subtle. Since the proposed method sees more augmented data in each epoch than the others, it is questionable whether the improvement actually comes from this additionally augmented data. Also, the idea itself is almost identical to the Lookahead optimizer, so the novelty is rather limited.
2. Minor mistake: In Table 1, row CIFAR-10, column ResNeXt50 Top-5, the best result is achieved by Lookahead (99.96), not the proposed method (99.95).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: What is the d (number of data augmentation) used in the experiment? I cannot find it in the context.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: The author addresses that the data augmentation process consumes extra time, which is likely to be the main limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your encouraging words and constructive comments. We sincerely appreciate your time in reading the paper, and our point-to-point responses to your comments are given below.
**Q1:What is the d (number of data augmentation) used in the experiment?**
In the experiments corresponding to Table 1, Table 2, and Table 4, we adopted a setting with $d=3$. The three data augmentation methods are RandomHorizontalFlip, RandomVerticalFlip, and RandAugment. This information can be found in lines 221-223 of the paper and also in Appendix C.
**Q2: Whether the improvement in accuracy comes from these additionally augmented data.**
We place great emphasis on the experimental settings, ensuring that the improvements observed in the experiments are primarily attributable to our method and not just to data augmentation. Specifically, within a single epoch, both the proposed Lookaround and the competitors are trained on the same $d$ data augmentations. With such a setup, we guarantee consistency in the data volume used by each method, thereby ensuring fair comparisons in terms of computation.
The table below presents experimental results for various networks on the CIFAR100 dataset. Here, H(data), V(data), and R(data) each denote a specific data augmentation applied to the original data; H(data) + V(data) + R(data) signifies training for one epoch on the combination of these three augmentations. Over the 200 epochs of experimentation, Lookaround (which uses SGDM as its base optimizer) demonstrated significant performance improvements.
Table S1: Top-1 accuracy of different networks under CIFAR100. Here "H" denotes random horizontal flip data augmentation. "V" denotes random vertical flip data augmentation. "R" denotes RandAugment data augmentation.
| Per epoch data | Method | VGG19 | ResNeXt50 | ResNet50 | ResNet101 | ResNet152 | Execution Time | Amount of Data |
| :-------------------------: | :---------------: | :---: | :-------: | :------: | :-------: | :-------: | :------------: | :------------: |
| H(data) + V(data) + R(data) | SGDM | 73.84 | 79.10 | 79.61 | 79.91 | 80.16 | 3x | 3x |
| H(data) + V(data) + R(data) | Lookaround | 74.29 | 81.14 | 81.60 | 81.97 | 82.22 | 3x | 3x |
**C3: Minor mistakes.**
Thank you very much for pointing out this issue. We will fix it in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed responses, which alleviate my concern about whether the improvement in accuracy comes from the additionally augmented data. However, for me, the novelty of the idea seems rather limited, as it is almost identical to the Lookahead optimizer. Thus, I prefer to keep my rating of "borderline accept."
---
Reply to Comment 1.1.1:
Title: Thanks for your reply!
Comment: Thank you for your feedback and the positive rating given to our work. We would like to provide further clarification regarding the differences between Lookaround and Lookahead as follows.
Despite their similar names, Lookaround and Lookahead are fundamentally distinct in both spirit and methodology. The Lookahead optimizer combines fast and slow weights along the training trajectory to identify a lower loss basin: intuitively, it chooses a search direction by looking ahead at the sequence of "fast weights" generated by another optimizer. Lookaround, in contrast, adopts an iterative weight-averaging strategy throughout the whole training period to constantly balance diversity against locality. The terms "forward, back" in the Lookahead title and "around, average" in the Lookaround title succinctly capture this distinction in both spirit and approach. Furthermore, our work theoretically demonstrates that the proposed Lookaround converges to a lower expected loss and a smaller steady-state variance, meaning the model is more stable and more resistant to noise; Lookaround also shows advantages in convergence speed. These findings are supported by extensive and comprehensive experiments, which validate the superior performance of Lookaround compared to the earlier Lookahead method. We also observe that Lookaround exhibits smaller oscillations and higher accuracy on the test-set convergence curve, further corroborating our theory.
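To make the mechanical contrast concrete, here is a minimal toy sketch of the two update rules on a hypothetical 1-D quadratic with made-up hyperparameters (`lr`, `k`, `alpha`, `d`); this is neither method's actual implementation, only the shape of the updates:

```python
# Toy 1-D quadratic f(x) = 0.5 * x**2 (gradient = x); hypothetical
# hyperparameters, illustrating only the shape of the two update rules.
lr, k, alpha, d = 0.1, 5, 0.5, 3

# Lookahead: an inner optimizer takes k "fast" steps forward, then the
# slow weight interpolates back toward the fast weight.
slow = fast = 4.0
for _ in range(k):
    fast -= lr * fast                    # k steps forward
slow = slow + alpha * (fast - slow)      # 1 step back

# Lookaround: d branches start from the same point, each takes k steps
# (in practice on differently augmented data), then all are averaged.
branches = [4.0] * d
for i in range(d):
    for _ in range(k):
        branches[i] -= lr * branches[i]  # around step
theta = sum(branches) / d                # average step
```

On this noiseless toy the branches coincide; in the real method their diversity comes from the different data augmentations each branch sees.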
We sincerely look forward to your reevaluation of our work and would greatly appreciate it if you could raise your score to boost our chance of greater exposure within the community. Thank you very much! | Summary: This paper introduces a new optimization algorithm named Lookaround, which draws inspiration from the recent achievements of weight averaging (WA) techniques in deep learning. The proposed Lookaround optimizer looks around nearby points by performing multiple gradient computations for a given training input using different data augmentations and averages them to obtain better generalizing solutions.
Strengths: The theoretical analysis in this paper largely relies on Zhang et al. (2019), including noisy quadratic analysis and deterministic quadratic convergence. While it does not introduce a novel form of analysis, it is significant as it establishes a theoretical foundation within the existing framework.
For experiments, the baseline comprises commonly employed optimization techniques, such as SWA (Izmailov et al., 2018) and SAM (Foret et al., 2021), which are widely accepted as standard approaches in the field. In addition to convolutional neural networks of various scales, the authors also considered experiments on vision transformers.
Weaknesses: Experimental issues:
* __There are no error bars.__
While the authors answered "yes" for "Error Bars", the paper only provides a single value across the tables and figures. It is unclear how many runs were conducted to derive those values. It would be preferable if the authors included averages accompanied by standard deviations to provide a more comprehensive representation of the experimental results.
* __The outcomes obtained from the ImageNet experiments appear to be strange.__
Despite the authors' efforts to demonstrate the scalability of the proposed algorithm, the Top-1 accuracy of 72.27% reported in Table 2 appears to be comparatively low. After reviewing Appendix C.1.2, it seems that the experimental setup follows the PyTorch convention (https://github.com/pytorch/examples/tree/main/imagenet), except for some additional augmentations. It is widely recognized that the ResNet-50 model typically achieves an accuracy of approximately 76% on the ImageNet dataset using this standard setup, which significantly differs from the 72.27% accuracy reported by the authors. It would be beneficial to provide clarification on the reasons behind the considerable performance drop observed in the ImageNet results.
Practical issues:
* __Excessive training costs incurred by the proposed algorithm.__
The Lookaround algorithm, as proposed, demands $d$ times the number of forward and backward passes for each optimization step. This is considerably higher compared to SAM (Foret et al., 2021), which only requires twice the number of passes. One could argue that the training epoch is effectively enlarged by a factor of $d$ and this is the actual reason for the performance improvements. It would be valuable to present results akin to Table 2 in Foret et al. (2021), that is, exploring the impact of increasing the number of training epochs through an ablation study.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: There are several issues in the experimental results of the current version of the paper, as mentioned above, and I would respectfully suggest the following to make the paper more solid:
* __The absence of SAM in Table 4.__
Is there a specific reason for excluding SAM from the results presented in Table 4?
* __The disparity in results for ResNet50 on CIFAR100 between Tables 1 and 6.__
Table 1 presents Top-1/5 accuracies of 81.60/95.99, while these values are not reflected in Table 6.
* __Top-5 accuracy does not provide a meaningful comparison in the main tables.__
It is suggested to exclude the Top-5 accuracy metric from the main results presented in Tables 1 and 4 for CIFAR10/100. Specifically, in the case of CIFAR-10, evaluation metrics where all techniques achieve 99.9% accuracy hold little significance. Instead, including the Top-5 accuracy in Table 2 would be more appropriate, as it provides a more meaningful performance evaluation for the ImageNet experiments.
* __It would be nice to provide uncertainty metrics.__
While the Top-1 accuracy is the main evaluation metric for classification, it would be advantageous to incorporate additional metrics that can effectively demonstrate the superiority of the proposed algorithm. For instance, including the negative log-likelihood (NLL) as a measure of in-domain uncertainty would provide valuable insights into the algorithm's performance.
* __It would be beneficial to offer an analysis of the training costs involved.__
Several algorithms, including the proposed one, impose additional computational burdens, which could impact their practicality (e.g., SAM requires performing double forward and backward passes; SWA requires an additional model copy in memory). Providing detailed information about the specific computational requirements of each algorithm would greatly benefit future researchers and engineers. The most straightforward analysis is measuring wall-clock time for training. It would be nice to replace the meaningless "#param." column with the training runtime in Table 4.
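As a concrete version of the uncertainty-metric suggestion above, a minimal sketch of the in-domain NLL computation, using hypothetical predicted probabilities (not values from the paper):

```python
import numpy as np

def nll(probs, labels):
    # mean negative log-likelihood of the true class (lower is better);
    # a standard in-domain uncertainty metric
    eps = 1e-12
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + eps)))

# hypothetical predicted class probabilities for 3 examples over 4 classes
probs = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.25, 0.25, 0.25, 0.25],
                  [0.05, 0.05, 0.80, 0.10]])
labels = np.array([0, 1, 2])
score = nll(probs, labels)   # ≈ 0.655
```

A model that is both accurate and well calibrated drives this score down, which is why NLL complements Top-1 accuracy as an evaluation metric.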
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors stated the additional training cost incurred by the proposed algorithm.
---
__References:__
Zhang et al. (2019). _Lookahead optimizer: k steps forward, 1 step back_.
Izmailov et al. (2018). _Averaging weights leads to wider optima and better generalization_.
Foret et al. (2021). _Sharpness-aware minimization for efficiently improving generalization_.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. In the following, your comments are first stated and then followed by our point-by-point responses.
**Q1: There are no error bars.**
Error bars (i.e., standard deviations) are depicted in Figure 6 of our paper. The detailed results are provided in Table S1. We will make this clearer in the revised version.
Table S1: Top-1 accuracy of Lookaround (with standard deviation).
| Method | Dataset | VGG19 | ResNet50 | ResNet101 | ResNet152 | ResNeXt50 |
| :--------: | :------: | :---------------: | :---------------: | :---------------: | :---------------: | :---------------: |
| Lookaround | CIFAR10 | 94.42 ($\pm$0.09) | 96.56 ($\pm$0.02) | 96.77 ($\pm$0.08) | 96.94 ($\pm$0.12) | 96.67 ($\pm$0.11) |
| Lookaround | CIFAR100 | 74.2 ($\pm$0.25) | 81.12 ($\pm$0.63) | 81.89 ($\pm$0.53) | 81.99 ($\pm$0.24) | 81.17 ($\pm$0.51) |
**Q2: The outcomes obtained from the ImageNet experiments appear to be strange.**
For the sake of fair comparisons in training data and computation cost, each method (including Lookaround and the competitors) was trained with the same data augmentations (random horizontal flip, random vertical flip, and RandAugment). This setup guarantees that every method uses the same data volume, ensuring fair comparisons in terms of computation. The divergence between Lookahead's performance as reported in the original paper and in the current study can be attributed to the differing data augmentation techniques employed.
**Q3: Excessive training costs incurred by the proposed algorithm.**
To compare the performance of Lookaround and standard SGDM training more fairly, we shortened Lookaround's training to 1/4 of the epochs used for SGDM. As shown in Table S2, Lookaround's performance remains competitive.
Table S2: Comparison between SGDM and Lookaround with ResNet50 on CIFAR100. Here "H" denotes random horizontal flip data augmentation. "V" denotes random vertical flip data augmentation. "R" denotes RandAugment data augmentation.
| Method | Data Augmentation Strategy | Epoch | CIFAR10 | CIFAR100 | Training time |
| :--------: | :------------------------: | :---: | :-----: | :------: | :-----------: |
|SGDM|"R"|200|95.57|78.59|1 Budget|
|SGDM|"H" + "V" + "R"|200|95.96 |79.61|3 Budget|
| Lookaround |"H" + "V" + "R"|50|95.82 |78.62|3/4 Budget|
| Lookaround |"H" + "V" + "R"|200|**96.59** |**81.60**|3 Budget|
**Q4: The absence of SAM in Table 4.**
We supplement Table 4 with SAM experiments on ResNet50 and ViT-B/16 under pre-training, shown in Table S3. In the pre-training experiments, both Lookaround and SAM significantly improve the standard training process: SAM is more suitable for the ViT architecture, while Lookaround is more suitable for CNN architectures. Moreover, the two are orthogonal, and combining them achieves an even higher performance improvement.
Table S3: Top-1 accuracy of SAM and Lookaround on CIFAR dataset under pre-training.
|Backbone |Method| CIFAR10 | CIFAR100 |
|:------: | :------------: | :-------: | :-------: |
|ResNet50 |SGDM|96.08|82.04|
|ResNet50 |SAM|97.02|82.75|
|ResNet50 |Lookaround|96.79|83.62|
|ResNet50 |Lookaround+SAM|**97.53**|**83.91**|
|ViT-B/16 |Adam|92.91|74.50|
|ViT-B/16 |SAM|98.02|89.13|
|ViT-B/16 |Lookaround|95.23|78.38|
|ViT-B/16 |Lookaround+SAM|**98.84**|**92.04**|
**Q5: The disparity in results for ResNet50 on CIFAR100 between Tables 1 and 6.**
The differing results for ResNet50 on CIFAR100 in Tables 1 and 6 come from different settings of $k$. In Table 1, $k$ is 5, meaning an average is taken every 5 batches. In Table 6, $k$ is "one-epoch", indicating that each branch of Lookaround trains for a complete epoch before weight averaging is performed. Thank you very much for pointing this out; we supplement this experiment in Table S4.
Table S4: Top-1/Top-5 accuracy (%) for different numbers of data augmentations (DA) using ResNet50 on the CIFAR100 dataset.
|# of DA|1|2|3|4|5|6|
|:-------:|:--:|:--:|:--:|:--:|:--:|:--:|
|Top-1 (%)|78.2|80.82|81.60|81.19|81.74|82.02|
|Top-5 (%)|94.5|95.19|95.99|95.65|95.85|96.02|
**Q6, Q7: Top-5 accuracy does not provide a meaningful comparison in the main tables. It would be nice to provide uncertainty metrics.**
Thank you very much for your suggestions; we will revise the paper accordingly. We present the test-set NLL loss for part of the experiments below; the data in this table correspond to the CIFAR100 results in Table 1 of the paper.
Table S5: NLL Loss values of different methods under ResNet family.
|Method|ResNet50|ResNet101|ResNet152|ResNext50|
|:--------:|:-------:|:-------:|:-------:|:-------:|
|SGDM|0.899|0.894|0.895|0.820|
|SWA|0.858|0.856|0.856|0.796|
|Lookahead|0.836|0.825|**0.809**|0.787|
|Lookaround|**0.81**|**0.823**|0.823|**0.73**|
**Q8: It would be beneficial to offer an analysis of the training costs involved.**
In the experiments comparing Lookaround with other optimization methods, we train each method for the same number of epochs, and the data for each epoch comes from the union of the $d$ augmented copies. Therefore, all non-SAM methods perform the same number of forward and backward passes, while SAM requires twice as many. Although each optimizer also updates momentum or averages weights, the time consumed by this step is negligible within the overall training.
Experimentally, we run each method for 50 epochs and report the average per-epoch execution time of ResNet50 on CIFAR100 on an RTX 3090 GPU. The results are shown in Table S6.
Table S6: Average execution time of each method traversing an epoch on ResNet50 on CIFAR100.
|Method| SGDM | Lookahead | AdamW | SAM | Lookaround |
| :------------: | :--: | :-------: | :---: | :--: | :--------: |
| Execution Time ($s$)|123|123|125|246|123|
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's effort.
> __Q2: The outcomes obtained from the ImageNet experiments appear to be strange.__
>
> For the sake of fair comparisons in training data and computation cost, each method (including Lookaround and the competitors) was trained with the same data augmentations (random horizontal flip, random vertical flip, and RandAugment). With such a setup, we guarantee consistency in the data volume utilized by each method, thereby ensuring fair comparisons in terms of computation. The divergence in performance observed between Lookahead, as reported in the original paper and the current study, can be attributed to the differing data augmentation techniques employed.
The checkpoint `ResNet50_Weights.IMAGENET1K_V1` from torchvision indicates a 76.13% accuracy using basic training receipt involving the SGD optimizer. However, the performance detailed in the current manuscript is significantly below expectations, despite using nearly identical settings (90 training epochs; MultiStepLR scheduler with decay steps at 30th and 60th epochs and a decay factor of 0.1; initial learning rate of 0.1 and a batch size of 256). This substantial difference in results might arise from flaws in the experimental configuration, raising concerns about the reliability of the experimental outcomes presented in the paper. Consequently, additional clarifications of why it happens are imperative to address this issue.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We sincerely appreciate your prompt feedback on our rebuttal, and we apologize for the delay in our response, which was due to the extensive clarification experiments conducted on ImageNet. We would now like to provide further clarification as follows.
As we have previously mentioned in our rebuttal, in order to ensure fair comparisons in terms of computation, we have employed the same multi-augmentation techniques for all methods. Additionally, we have incorporated a one-epoch warm-up phase, as detailed in Appendix C.1.2, for evaluating all methods in our experiments. This warm-up phase has proven effective in all our CIFAR experiments, and thus, we maintained its inclusion in the ImageNet experiments. However, after conducting extensive and rigorous experiments, we discovered that the warm-up phase did not yield optimal results when combined with multi-augmentation on ImageNet. Detailed results can be found in Table S7. Consequently, by removing the warm-up phase, all our experiments on the competitors achieved performance comparable to the results published in prior works. Notably, the proposed Lookaround once again outperformed these baseline methods.
We sincerely apologize for any confusion caused by our experimental results. We firmly believe that a superior optimizer should consistently yield improved results across various settings, rather than being limited to specific settings. The proposed Lookaround demonstrates superior performance in both our original settings (with warm-up) and the revised settings (without warm-up), further confirming the efficacy of our method.
We will update these results in the revised version. Thank you for your understanding and consideration.
Best regards,
The authors of Lookaround
Table S7: Comparison between SGDM and Lookaround with ResNet50 on ImageNet. Here "H" denotes random horizontal flip data augmentation. "V" denotes random vertical flip data augmentation. "R" denotes RandAugment data augmentation.
| Method | Data Augmentation Strategy | Warm Up | Top-1(%) | Top-5(%) |
| :--------: | :------------------------: | :------: | :------: | :------: |
|SGDM|"H" + "V" + "R"| ✓ |72.27 |90.99|
|Lookaround |"H" + "V" + "R"| ✓ |75.11|92.43|
|SGDM|"H" + "V" + "R"| ✗ |75.97 |92.89|
|**Lookaround** |"H" + "V" + "R"| ✗ |**77.32**|**93.29**|
---
Reply to Comment 1.1.2:
Comment: Dear reviewer,
We sincerely thank you for your constructive comments.
As suggested, we have conducted comprehensive experiments on ImageNet, comparing the performance of SWA, SGDM, Lookahead, and Lookaround. The results, as presented in Table S8, again demonstrate that Lookaround achieves notable performance improvements when no warm-up setting is employed.
Table S8: Comparison between SGDM, SWA, Lookahead, and Lookaround with ResNet50 on ImageNet. Results for SWA$^+$ and Lookahead$^+$ are taken from the original papers.
| Method | Warm Up | Top-1(%) | Top-5(%) |
| :--------: | :------: | :------: | :------: |
|SGDM| ✗ |75.97 |92.89|
|SWA| ✗ |76.78 |93.18|
|SWA$^+$| ✗ |76.97 |-|
|Lookahead| ✗ |76.52 |93.11|
|Lookahead$^+$| ✗ |75.49 |-|
|**Lookaround** | ✗ |**77.32**|**93.29**|
As the allocated time for reviewer-author discussion nears its end, we sincerely look forward to your reevaluation of our work and would greatly appreciate it if you could raise your score to boost our chances of gaining more exposure in the community. Thank you very much!
Best regards,
The authors of Lookaround | Summary: This paper proposes Lookaround, a new optimization method that incorporates weight averaging into the optimization process. The algorithm consists of two steps: 1) the around step launches several parallel runs of gradient descent led by different data augmentations, 2) the average step does weight averaging of the networks obtained by these parallel runs. These two steps are repeated along the whole training process, with the result of the average step being a starting point for the next around step. The proposed method is similar to Stochastic Weight Averaging and Model Soups, though it does the averaging during training rather than at the end of training. Lookaround shows strong results compared to the existing optimization methods (SGD with momentum, AdamW, Lookahead, SAM).
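The around/average loop described in this summary can be sketched as follows; this is a toy numpy illustration on a least-squares problem with hypothetical hyperparameters (`d`, `k`, `lr`) and a small input perturbation standing in for the data augmentations, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))
w_true = rng.normal(size=8)
y = X @ w_true

def grad(w, Xb, yb):
    # gradient of the mean squared error 0.5 * ||Xb @ w - yb||^2 / n
    return Xb.T @ (Xb @ w - yb) / len(yb)

def augment(Xb, i, rng):
    # stand-in for the i-th data augmentation: a small input perturbation
    return Xb + 0.01 * (i + 1) * rng.normal(size=Xb.shape)

d, k, lr = 3, 5, 0.1
w = np.zeros(8)
for _ in range(20):
    # around step: d branches start from the shared weights and each
    # takes k gradient steps on its own augmented view of the data
    branches = [w.copy() for _ in range(d)]
    for i in range(d):
        for _ in range(k):
            branches[i] -= lr * grad(branches[i], augment(X, i, rng), y)
    # average step: the averaged branch weights become the next start point
    w = np.mean(branches, axis=0)

final_loss = 0.5 * np.mean((X @ w - y) ** 2)
```

The result of each average step seeds the next around step, so averaging happens throughout training rather than only at the end as in SWA or Model Soups.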
Strengths: 1. The idea of using weight averaging during optimization with various augmentations is novel and leads to a better generalization of neural networks.
2. This paper has a theoretical analysis of the quadratic noise setup which demonstrates faster convergence of Lookaround compared to SGD and Lookahead.
Weaknesses: Despite the fact that the paper proposes a novel and interesting method, in my opinion, it is not quite ready for publishing in its current form. I believe that addressing the following concerns could help to make it a much stronger submission.
1. The text of the paper is hard to follow, contains some vague parts and numerous typos.
- The theoretical reasoning is heavily inspired by the Lookahead paper, which makes it impossible to understand the theory without reading the original paper (e.g., I could not understand Proposition 1 and the definition of $\alpha$ without reading the Lookahead paper). Moreover, there is no remark that equations 4 and 5 are derived in the Lookahead paper.
- Line 5-6: weight averaging and ensembles are mixed up.
- Line 313: it is not clear how gradient boosting is related to this setup.
2. I have found two contemporaneous works published on arxiv (https://arxiv.org/abs/2302.14685, https://arxiv.org/abs/2304.03094) that propose approaches very similar to Lookaround. I believe they need to be at least discussed in the related work section.
3. The experimental part of the paper raises some questions and is not convincing enough, in my opinion.
- The main results (Table 1) lack standard deviations, even though they are claimed in the OpenReview form.
- The baselines seem too weak, i.e., the Lookahead paper reports >75% ImageNet test accuracy on ResNet-50 for SGD baseline, while this paper shows 72.27%.
- The augmentation policy during baseline training is not clear. If the same set of augmentations as for Lookaround is used, then this may be the reason for the bad quality of the baselines.
- Despite the paper proposing a new optimization method, it lacks experiments on a wider choice of architectures. Table 1 illustrates only VGG-19 and four networks from the ResNet family. It would be beneficial to add experiments on more modern convolutional architectures and more extensive experiments on image transformers.
- The paper lacks a fair comparison to logit ensembles. If I understood correctly, different models of the ensemble are trained with different Lookaround augmentations. This leads to the suboptimal quality of each model and, as a result, to a suboptimal ensemble.
- The comparison to model soups in Appendix D is not fair as well, i.e., the original paper fine-tunes models with various hyperparameters and does a greedy search for the combination with the best validation accuracy. Near-zero accuracy of the average of $\theta_1$ and $\theta_2$ may be due to improper fine-tuning hyperparameters. Moreover, this is an important baseline, and it would be better to move this comparison to the main part of the paper.
- A proper ablation study of augmentations is required. Is it possible to train Lookaround without augmentations for the around step “branches” (e.g., the only source of randomness is batch ordering)? What if each branch utilizes the same set of augmentations? Why are the mentioned augmentations used (and not some other ones, e. g. vertical flip seems to be a strange augmentation)? Probably, some of these setups are covered in Section 4.4, but it is not clear from the text.
- Top-5 accuracy is redundant and makes it difficult to read the tables.
4. Minor issues/typos:
- Line 25: the sentence is about LMC, but the citation [7] leads to the paper about permutations (Entezari et al., 2022).
- Line 112-113: we provide
- Line 147-148: repetition
- Line 156: we analyze
- Line 162: it seems that it should be $\mathbb{E} [c_i^2]$
- Line 242: CIFAR
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. What is the augmentation policy during baseline training?
2. I do not understand the experimental setup in the ablation study (Section 4.4). Is the training without DA and WA similar to the regular training without augmentations? Is the training without DA but with WA similar to Lookaround, where the ordering of batches is the only source of randomness between branches?
3. Section 3.2.2, convergence on deterministic quadratic functions. How branches of Lookaround are different from each other if there is no randomness in gradient steps?
4. Could the authors elaborate on how the Lookaround method relates to the methods proposed in the contemporaneous works (https://arxiv.org/abs/2302.14685, https://arxiv.org/abs/2304.03094)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The main limitation of the proposed method is the increased training budget, which is highlighted by the authors. However, there is no fair comparison to networks trained for a larger number of epochs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed comments. We hope the following response will address your concerns.
**C1: Two contemporaneous works propose approaches very similar to Lookaround.**
Thanks for sharing these two great works! After carefully reading the two papers, we found that the three works share moderate similarities while exhibiting some notable differences:
**Methodology.** DART does not start weight averaging until the second half of the training process. PAPA, like Lookaround, performs weight averaging more frequently throughout training; however, PAPA combines each sub-model's weights with the average weights using a weighting factor that varies across training stages. Such a strategy is more sophisticated but introduces more hyperparameters, increasing the method's complexity. Lookaround adopts a consistent weight-averaging strategy throughout training, making it easier to use in practice.
**Theory.** DART proves that averaging while training is more robust from the perspective of feature noise. Lookaround is proven to attain a lower expected loss in the noisy quadratic setting. PAPA focuses mainly on empirical validation, without theoretical analysis.
**Experiments.** PAPA mainly compares with Model Soups. DART validates its effectiveness in domain generalization. Lookaround is tested in both the fine-tuning and training-from-scratch scenarios.
In conclusion, although they share a similar essence, the three methods differ substantially in theoretical foundations, experimental approaches, and method instantiations. They complement one another, and each offers noteworthy contributions to the research community.
**C2: The experimental part of the paper raises some questions.**
**The standard deviations**. The error bar (i.e., standard deviations) is depicted in Figure 6 in our paper. The error bar of Lookaround trained from scratch are provided in Table S1. We will make it clearer in the revised version.
Table S1: Top-1 accuracy of Lookaround (with standard deviation).
|Method|Dataset|VGG19|Resnet50|Resnet101|Resnet152|ResNext50|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Lookaround|CIFAR10|94.42 ($\pm$0.09)|96.56 ($\pm$0.02)|96.77 ($\pm$0.08)|96.94 ($\pm$0.12)|96.67 ($\pm$0.11)|
|Lookaround|CIFAR100|74.2 ($\pm$0.25)|81.12 ($\pm$0.63)|81.89 ($\pm$0.53)|81.99 ($\pm$0.24)|81.17 ($\pm$0.51)|
**The augmentation policy and the ImageNet accuracy**: For the sake of fair comparisons in training data and computation cost, each method (Lookaround and its competitors) was trained with the same data augmentations (random horizontal flip, random vertical flip, and RandAugment). With such a setup, we guarantee consistency in the data volume utilized by each method, thereby ensuring fair comparisons in terms of computation. The divergence in performance of Lookahead can be attributed to the differing data augmentation techniques employed.
**More modern architectures**: We have tested the proposed Lookaround optimizer with diverse and popular model architectures, including VGG19, the ResNet family, ResNeXt50, and ViT-B/16. We believe these representative architectures are sufficient to validate the effectiveness of the proposed method across architectures.
**Fair comparison to logit ensembles:** We have tried different training strategies for the base models in the logit ensemble, including training different models with different augmentations and training all base models with the same mixed augmentations. In our experiments, no matter how the base models are trained, Lookaround exhibits consistently superior performance to logit ensembles. We will revise the paper to make this point clearer.
**The comparison to model soups:** Here we would like to remind the reviewer that Lookaround is proposed for a quite different problem setting compared to Model Soups. Model Soups is designed for more efficient utilization of existing models, assuming that these models have already been pre-trained on specific datasets. Lookaround, on the other hand, serves as an optimizer for training deep models, devoid of any assumptions regarding the initialization of the model parameters. This distinction is why we included the comparison with Model Soups in the supplementary material. In terms of fairness, we have conducted experiments using both greedy search and uniform search. The results are presented in Table S2, where Lookaround outperforms the greedy Model Soups, further validating the effectiveness of our approach.
Table S2: Comparison between Lookaround and Model Soups. "M=18" represents 18 different hyperparameters in Model Soups. "M=3" represents 3 different data augmentation techniques.
|Optimizer|Method (M=18)|CIFAR100|
|:-:|:-:|:-:|
|SGDM|Model Soups (Greedy)|82.07|
|SGDM|Model Soups (Uniform)|1.00|
|AdamW|Model Soups (Greedy)|79.18|
|AdamW|Model Soups (Uniform)|78.28|
|**Lookaround**|Lookaround (M=3)|**83.62**|
**Ablation study**: Section 4.4 of the paper sets up exactly the experiment you describe. The WA-only setting corresponds to each branch of Lookaround using the same set of augmentations. We use random vertical flip because it brings more image diversity, which encourages the model to acquire a degree of rotation invariance. We will spell out these differences in the revised version.
**C3: How branches of Lookaround are different from each other if there is no randomness in gradient steps?**
Please refer to Appendix B.2, where we make an approximate calculation. The idea behind Lookaround is to choose models from different branch points in the loss landscape for weight averaging. For this deterministic quadratic function, we average the model points corresponding to every point in the $k$-step trajectory.
**C4: The text of the paper is hard to follow.**
We are sorry for any confusion caused by our writing. We will fix the issues you mentioned in our revision.
---
Rebuttal 2:
Comment: Dear reviewer,
We sincerely appreciate the questions you posed during the review phase. In our first rebuttal, we clarified that our experiments were conducted with strict fairness, and we will provide a more detailed explanation of the theory in the revised version, carefully addressing your concerns. As we near the midpoint of the author-reviewer discussion stage, we look forward to receiving further feedback and engaging in productive conversations with you.
Best regards,
The authors of Lookaround
---
Rebuttal Comment 2.1:
Comment: Thank you for the response and additional clarifications.
Could you please elaborate more on the following points:
1. **ImageNet experiments.** Could you please provide results for the baselines other than SGDM for the ImageNet experiments without warmup?
2. **Model Soups.** Your results for model soups do not match the original model soups paper and the DiWA paper [1] (and also my own experience): uniform model soups should work much better than random. Could you please explain what are the reasons for such different behavior in your opinion? I suspect the reason is the poor choice of the fine-tuning procedure, which goes too far from the pre-trained checkpoint. Moreover, this procedure seems to lose the benefits of pre-training since it shows much worse results than the ones in the literature (84.5% in [2] and 86.4% in [3] v.s. yours 82%). Also, did you update batch norm parameters for model soups?
3. **Logit Ensembles.** Your results for logit ensembles are quite surprising, hence, I am not fully convinced by the experiments on one dataset. Did you make similar experiments on other datasets? Did you try to compare Lookaround to an ensemble of SWA models?
[1] Ramé et al. Diverse Weight Averaging for Out-of-Distribution Generalization. NeurIPS 2022. \
[2] Kornblith et al. Do Better ImageNet Models Transfer Better? CVPR 2019. \
[3] Grill et al. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning. NeurIPS 2020.
---
Reply to Comment 2.1.1:
Title: Reply to the second comment [1/2]
Comment: We appreciate your feedback on our rebuttal. Please find our responses to your questions below.
> Could you please provide results for the baselines other than SGDM for the ImageNet experiments without warmup?
Thanks for your comment. We have conducted comprehensive experiments on ImageNet, comparing the performance of SWA, SGDM, Lookahead, and Lookaround. The results, as presented in Table S3, again demonstrate that Lookaround achieves notable performance improvements when no warm-up setting is employed.
Table S3: Comparison between SGDM, SWA, Lookahead and Lookaround with ResNet50 on ImageNet. Results of SWA$^+$ and Lookahead$^+$ are taken from the original papers.
| Method | Warm Up | Top-1(%) | Top-5(%) |
| :--------: | :------: | :------: | :------: |
|SGDM| ✗ |75.97 |92.89|
|SWA| ✗ |76.78 |93.18|
|SWA$^+$| ✗ |76.97 |-|
|Lookahead| ✗ |76.52 |93.11|
|Lookahead$^+$| ✗ |75.49 |-|
|**Lookaround** | ✗ |**77.32**|**93.29**|
> Your results for model soups do not match the original model soups paper and the DiWA paper [1] (and also my own experience): uniform model soups should work much better than random.
Here we would like to remind the reviewer that uniform soups do not always improve model performance, which is also acknowledged in the original Model Soups paper (please refer to Table J.1 there). Appendix J.1 of Model Soups [1] indicates that weight averaging tends to fail unless the learning rate is kept below 1e-4. This explains the failure of uniform soups under SGDM and their success under AdamW: SGDM is sensitive to the learning rate and its fine-tuning performance is very poor at such low learning rates, so we cannot obtain well-behaved uniform soups with it.
> Could you please explain what are the reasons for such different behavior in your opinion? I suspect the reason is the poor choice of the fine-tuning procedure, which goes too far from the pre-trained checkpoint.
The success of uniform soups in Model Soups relies on stringent conditions. Specifically, the individual sub-models within a uniform soup must remain within the same low-loss basin for weight averaging to be effective and maintain high accuracy. However, when many models are combined under varying hyperparameter configurations, a strong hyperparameter change such as a different data augmentation can easily push certain models into different low-loss basins, causing weight averaging to fail. Consequently, achieving success with uniform soups requires careful control of the hyperparameter differences.
This is precisely where the significance of our work lies. With the proposed Lookaround method, the weights are continuously averaged throughout the training process, ensuring their alignment within the same loss basin and convergence towards flatter minima.
> Moreover, this procedure seems to lose the benefits of pre-training since it shows much worse results than the ones in the literature (84.5% in [2] and 86.4% in [3] v.s. yours 82%)
Here we would like to, again, remind the reviewer that Lookaround is proposed for a quite different problem setting compared to Model Soups. Model Soups is designed for more efficient utilization of existing models, assuming that these models have already been pre-trained on specific datasets. Lookaround, on the other hand, serves as an optimizer for training deep models, devoid of any assumptions regarding the initialization of the model parameters.
Regarding the difference between our ResNet50 results on CIFAR100 and the results in [2] and [3], the reason is that both [2] and [3] conducted about 50 grid searches on learning rate and weight decay for the experiment, while we didn't.
> Also, did you update batch norm parameters for model soups?
Following the setting in the original Model Soups paper and code, we did not make additional updates to the batch norm parameters. | null | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This article introduces the Lookaround Optimizer, a novel optimization algorithm for deep neural networks. The Lookaround Optimizer is based on the idea of lookaround, which involves maintaining two sets of weights. The first set of weights is updated using the standard gradient descent algorithm, while the second set of weights is updated using the averaged gradients of the first set. The Lookaround Optimizer has been shown to improve the generalization performance of deep neural networks, and it outperforms other state-of-the-art optimization algorithms on a variety of benchmark datasets. The authors also provide a theoretical analysis of the Lookaround Optimizer. Overall, the Lookaround Optimizer is a promising approach for improving the training of deep neural networks.
Strengths: - The Lookaround Optimizer performs better than other state-of-the-art optimization algorithms (Lookahead, SAM) on multiple benchmark datasets.
- The proposed optimizer seems to be model-free and can be applied to various computer vision scenarios.
- The paper is well-written and easy to understand.
Weaknesses: - While the authors have demonstrated fascinating performance on benchmarks, the technical innovation of this paper is limited. Similar ideas that utilize multiple models have already been proposed in various areas, especially in meta learning.
- With that being said, while the authors have mentioned the connections to these related topics, they did not compare with some of them. For example, I think at least the authors should compare the performance of Lookaround Optimizer with Model Soups (i.e., train multiple copies of models using the same initialization, the same order of data, but with different augmentation), and other model merging techniques like Model Ratatouille.
- The training time will be a bottleneck for applying this method as the authors acknowledged.
- Another limitation is that augmentation seems to be a must for this method. Therefore, it seems that this method is not applicable (at least not straightforwardly) to other modalities, such as language and speech.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and suggestions. In the following, your comments are first stated and then followed by our point-by-point responses.
**Q1: Similar ideas that utilize multiple models have already been proposed in various areas, especially in meta learning.**
Thanks for the feedback. Broadly speaking, the goal of meta-learning is to train a model on a diverse range of learning tasks, such that it can quickly solve new learning tasks. To the best of our knowledge, the closest work to the proposed Lookaround is perhaps the family of MAML-based approaches. However, Lookaround distinguishes itself significantly from MAML in at least the following two aspects:
* **Problem Settings.** MAML is motivated by learning the common knowledge across tasks, such that new tasks can be quickly solved. In other words, MAML typically assumes the availability of massive numbers of training tasks, with the ultimate goal of solving new tasks with only a few instances. Lookaround, like SGD, Lookahead, and other optimizers, focuses on addressing general optimization problems.
* **Methodologies.** MAML trains the initial parameters of the model to maximize its performance on a new task after updating the parameters through one or more gradient steps, utilizing a small amount of data from that particular task. In contrast, Lookaround simultaneously trains multiple models on different augmentations and periodically averages their weights to obtain a final model with enhanced generalization.
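A minimal sketch of the update scheme described above, with plain NumPy arrays standing in for network weights (the function and parameter names are our own simplification, not the paper's):

```python
import numpy as np

def lookaround_train(w0, grad_fns, lr=0.1, k=5, rounds=100):
    """Sketch of the 'around' scheme: each of d branches takes k SGD
    steps using its own augmentation-specific gradient function, then
    all branches are reset to the average of their weights."""
    branches = [np.array(w0, dtype=float) for _ in grad_fns]
    for _ in range(rounds):
        for i, grad in enumerate(grad_fns):
            for _ in range(k):                 # k independent inner steps
                branches[i] = branches[i] - lr * grad(branches[i])
        avg = np.mean(branches, axis=0)        # periodic weight averaging
        branches = [avg.copy() for _ in branches]
    return branches[0]
```

With quadratic losses pulling each branch toward a different optimum (a stand-in for different augmentations), the periodically averaged iterate converges to the mean of the optima rather than following any single branch.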
**Q2: Compare the performance of Lookaround optimizer with Model Soups and Model Ratatouille.**
We greatly appreciate the reviewer's suggestion. Here we provide the experimental results, details and discussions as follows.
Table S1: Comparisons between Lookaround, Model Soups, and Model Ratatouille on ResNet50. "M=18" represents 18 different hyperparameters. "M=3" represents that Lookaround is trained using 3 different data augmentation techniques.
| Optimizer | Method (M=18) | CIFAR100 |
| :------------: | :-------------------------: | :-------: |
| SGDM | Model Soups (Greedy) | 82.07 |
| SGDM | Model Soups (Uniform) | 1.00 |
| AdamW | Model Soups (Greedy) | 79.18 |
| AdamW | Model Soups (Uniform) | 78.28 |
| AdamW | Model Ratatouille (Greedy) | 80.31 |
| AdamW | Model Ratatouille (Uniform) | 1.58 |
| Lookaround | Lookaround (M=3) | **83.62** |
**Experimental Details**: The 18 configurations come from the combinations of three data augmentations (RandomHorizontalFlip, RandAugment, AutoAugment), three initial learning rates ($\{0.2, 0.1, 0.05\}$ for SGDM, $\{5e^{-5}, 2e^{-5}, 1e^{-5}\}$ for AdamW), and label smoothing or not. For Model Ratatouille, we choose Stanford Cars196, Flowers102, and CIFAR10 as auxiliary domains, and CIFAR100 as the target domain for further fine-tuning.
**Result Discussions**: From Table S1, it can be easily seen that Lookaround *outperforms Model Soups and Model Ratatouille* significantly. Another notable result is that both Model Soups and Model Ratatouille easily break down in uniform averaging strategies (Model Soups with SGDM achieves $1\%$, and Model Ratatouille with AdamW achieves $1.58\%$). In contrast, Lookaround periodically averages the weights during the entire training process, which effectively bypasses the pitfalls of weight averaging between independently trained models.
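For reference, the two averaging strategies compared above can be sketched as follows (a simplified rendering with flat weight vectors; the greedy recipe follows the Model Soups paper's description, but the helper names are ours):

```python
import numpy as np

def uniform_soup(models):
    """Uniform soup: plain average of all candidate weights."""
    return np.mean(models, axis=0)

def greedy_soup(models, val_score):
    """Greedy soup: visit candidates in decreasing validation score and
    keep each one only if adding it does not hurt the soup's score."""
    order = sorted(range(len(models)), key=lambda i: val_score(models[i]),
                   reverse=True)
    kept = [models[order[0]]]
    best = val_score(np.mean(kept, axis=0))
    for i in order[1:]:
        candidate = np.mean(kept + [models[i]], axis=0)
        score = val_score(candidate)
        if score >= best:
            kept.append(models[i])
            best = score
    return np.mean(kept, axis=0)
```

Greedy selection can discard a model that sits in a different basin, while the uniform soup is dragged toward it, which is consistent with the failure mode of uniform averaging seen in Table S1.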
While Lookaround outperforms Model Soups and Model Ratatouille, it is still necessary to note that *Lookaround is proposed for a quite different problem setting compared to the other two methods*. Model Soups and Model Ratatouille are designed for more efficient utilization of existing models, assuming that these models have already been pre-trained on specific datasets. Lookaround, on the other hand, serves as an optimizer for training deep models, devoid of any assumptions regarding the initialization of the model parameters.
**Q3: The training time will be a bottleneck for applying Lookaround.**
We agree that it is imperative to establish equitable comparisons in computational analysis among different approaches, which is precisely what we've accomplished. Given that the proposed Lookaround utilizes $d$ times the amount of data, we took care that in our primary experiments, all other methods were also trained using $d$ times the data volume. Specifically, within a single epoch, both the proposed Lookaround and the competitors undergo training on an identical $d$ times the data augmentations. With such a setup, we guarantee consistency in the data volume utilized by each method, thereby ensuring fair comparisons in terms of computation.
In the limitation section, we mentioned that Lookaround is limited by the additional cost, considering that Lookaround usually needs more training time to converge than its competitors. However, when restricted to the same training budget, Lookaround still outperforms the others even if it has not reached its own optimum. We will provide further clarity on this aspect in the revised paper.
**Q4: It seems that this method is not applicable to other modalities.**
We wholeheartedly agree that exploring the application of Lookaround in other modalities, such as NLP, holds great promise. Indeed, there are existing methods of data augmentation in language tasks, such as EDA[1], TTA[2]. While it may not be a direct and straightforward adaptation, the underlying idea of Lookaround can be explored and applied in innovative ways to these modalities. We eagerly anticipate future advancements and the emergence of new approaches that harness the concept of Lookaround in diverse modalities.
[1] Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks, EMNLP-IJCNLP.
[2] Text AutoAugment: Learning Compositional Augmentation Policy for Text Classification, EMNLP.
---
Rebuttal 2:
Comment: Dear reviewer,
We are glad that the reviewer appreciates our attempt, and sincerely thank the reviewer for the constructive comments.
As suggested, we have elucidated the distinctions between Lookaround and meta-learning approaches and have conducted an exhaustive comparison of Lookaround with other methods such as Model Soups and Model Ratatouille.
Since two-thirds of the allocated time for reviewer-author discussion has already elapsed, we sincerely look forward to your reevaluation of our work and would greatly appreciate it if you could raise your score to boost our chance of more exposure to the community. Thank you very much!
Best regards,
The authors of Lookaround
---
Rebuttal Comment 2.1:
Title: Acknowledgement
Comment: Thanks for the detailed responses! I have looked into the responses and I think that they have addressed my concerns. However, I have to admit that I might have overlooked some performance gap regarding the baselines. Therefore, I will increase my score and lower my confidence given this controversial situation. | Summary: Flatness-aware optimizers have gained significant attention in the field of research for training deep neural networks that are robust. Weight Averaging (WA) is a popular approach to finding solutions within the flat regions of the loss surface. However, previous WA methods have two limitations. First, when WA is performed within a single optimization trajectory after training convergence, the averaged members have limited functional diversity. Second, when WA is performed across diverse modes, the averaged weights may negatively impact performance. To address these challenges, this paper introduces a new method called Lookaround Optimizer, which aims to overcome these limitations.
Strengths: Originality
- Proposed optimizer seems to be relatively simple and straightforward but still attractive.
- This article presents a theoretical analysis regarding the variance of the steady state.
Clarity
- The paper is written effectively, ensuring high accessibility for readers.
- Methods are simple and easy to follow.
Weaknesses: Experiments
- Regarding the experiments conducted, the authors acknowledged in the Limitation section that their proposed method incurs additional training costs (I think the proposed optimizer requires nearly $d$ times more computational resources when averaging $d$ models). Therefore, for a fair comparison, it would be necessary for other baseline methods to undergo additional training epochs as well. However, it appears that the authors used the same number of training epochs for all the experiments.
- The overall results of the experiments are rather counterintuitive. Particularly in Table 3, the reported accuracy of the Deep Ensemble method being lower than the accuracy of a single solution trained using the proposed method is quite unexpected. Similarly, in Table 1, the reported performance of flatness-aware optimizers such as SWA [1] and SAM [2] being lower than SGDM contradicts the knowledge and expectations of the research community.
References
[1] Pavel Izmailov, D. A. Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. Conference on Uncertainty in Artificial Intelligence, 2018.
[2] Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. International Conference on Learning Representations, 2021.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Experiments
- The response regarding concerns addressed in the Weakness section will be critical to the final decision.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are adequately addressed in the Limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your comments on our work. We hope the following response will address your concerns.
**Q1: Lookaround uses $d$ times more computation. How does Lookaround fairly compare to other methods?**
We agree that it is imperative to establish equitable comparisons in computational analysis among different approaches, which is precisely what we've accomplished. Given that the proposed Lookaround utilizes $d$ times the amount of data, we took care that in our primary experiments (corresponding to Table 1 to 4 in the paper), all other methods were also trained using $d$ times the data volume. Specifically, within a single epoch, both the proposed Lookaround and the competitors undergo training on an identical $d$ times the data augmentations. With such a setup, we guarantee consistency in the data volume utilized by each method, thereby ensuring fair comparisons in terms of computation. With the same amount of computation, Lookaround outperforms the competitors by a considerable margin, which demonstrates the superiority of Lookaround in efficiency.
In the limitation section, we mentioned that Lookaround is limited by the additional cost, considering that Lookaround usually needs more training time to converge than its competitors. However, when restricted to the same training budget, Lookaround still outperforms the others even if it has not reached its own optimum. We will provide further clarity on this aspect in the revised paper.
**Q2: In Table 3, the reported accuracy of the ensemble method being lower than the accuracy of a single solution trained using the proposed method is quite unexpected.**
Thanks for the comment. The proposed Lookaround, as previously mentioned, achieves remarkably superior performance with a single model compared to the preceding ensemble method. In the case of Logit Ensemble, results are obtained by ensembling *independently* trained models, so the performance improvement from ensembling derives solely from their diversity. On the contrary, Lookaround can be perceived as an ensemble (recall that weight averaging approximates a straightforward ensemble, as proven in SWA) of *simultaneously and interdependently* trained models. During the training process, these models benefit from one another through weight averaging and converge towards flatter minima. The performance gain arises not only from diversity (each model is trained on different augmentations), but also from the superiority of the base models. Hence, it is entirely rational that the proposed Lookaround surpasses simple ensembling methods, which should be considered an advantage rather than a disadvantage of our work.
**Q3: Similarly, in Table 1, the reported performance of flatness-aware optimizers such as SWA and SAM being lower than SGDM contradicts the knowledge and expectations of the research community.**
Thanks for raising these concerns. Here we would like to make the following clarifications.
- For the optimizer SWA, we would like to remind the reviewer that it outperforms SGDM in nearly all our experiments, as evidenced in Table 1, Table 2, and Table 4 in our paper. The only exception arises in the case of the model VGG19 on CIFAR100, where SWA achieves slightly lower accuracy. These results align quite consistently with the prevailing consensus within the research community.
- For the optimizer SAM, we have carefully checked the experimental configurations and repeated the experiments three times. The results remain consistent across our repeated trials. We speculate that the heavy data augmentation employed in our experiments might be the predominant factor in the inferior performance of SAM. To ensure fair comparisons in computation, both Lookaround and the competitors, including SAM, are evaluated with heavy data augmentations (random horizontal flip + random vertical flip + RandAugment). The heavy data augmentations may serve as a potential distraction for SAM, as both techniques are proposed to enhance generalization. To verify this hypothesis, we conducted a sanity check of our code with only a single type of data augmentation. The results are provided in Table S1. It can be seen that *SAM significantly outperforms SGDM with only one type of augmentation, which is still consistent with the consensus of the research community*.
Table S1: Comparison of different optimization methods for ResNet18 on various datasets. "H" denotes random horizontal flip data augmentation. "R" denotes RandAugment data augmentation.
| | SGDM (H) | SGDM (R) | SAM (H) | SAM (R) | Lookaround (R+H) | Lookaround+SAM (R+H) |
| :--------------: | :---: | :---: | :---: | :---: | :--------: | :------------: |
| CIFAR100 | 75.72 | 76.14 | 76.84 | 77.08 | 77.20 | 77.55 |
| Flowers102 | 95.93 | 96.54 | 97.01 | 97.35 | 97.65 | 97.75 |
| Stanford cars196 | 86.77 | 87.02 | 87.53 | 87.82 | 89.35 | 89.67 |
Furthermore, the results listed in Table 1 and Table S1 imply a potential merit of the proposed Lookaround: it is orthogonal to SAM and heavy data augmentation, and can work compatibly with both, as validated by *ours* (Lookaround + heavy augmentation) in Table 1, and *Lookaround+SAM* in Table S1. We hope our response can successfully address your concerns and strengthen your confidence in our work.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
We extend our sincerest gratitude for the questions you raised during the initial rebuttal stage. We have taken great care in providing thorough answers to address your concerns. As we approach the halfway point of the author-reviewer discussion phase, we are eager to receive further feedback and engage in fruitful discussions with you.
Best regards,
The authors of Lookaround | null | null | null | null |
Metropolis Sampling for Constrained Diffusion Models | Accept (poster) | Summary: The paper presents a rejection-sampling technique applied to diffusion models when sampling defined on manifolds.
The paper contains some theoretical results on the convergence of the proposed algorithm, in particular the asymptotic convergence to the reflected Brownian motion. The paper also contains a few comparisons to other methods, in particular, methods using regularization (such as barrier functionals) and reflected stochastic processes.
Strengths: The paper applies the notion of rejection sampling to denoising diffusion models.
The proposed algorithm is very simple to implement in code, and it requires much less intrusive modifications to the sampling algorithm compared to the alternatives the authors mention.
Weaknesses: - It needs to be clarified what the algorithmic innovation is in the paper. Rejection sampling is a standard technique, and the application to diffusion models directly applies an old idea to a trendy topic. Also, it is not clear how valuable the theoretical treatment is. The authors show that the methodology is asymptotically equivalent to RBM, but then in the experiments, the authors claim that their algorithm outperforms RBM.
- The authors mention that scalability is one of the main advantages of their method. However, it is not clear what the scalability is with respect to. Also, for algorithms (which is the case here), scalability often means showing how a bigger problem can be split into different pieces that can be run independently so that the overall runtime remains low. In this case, only one experiment showcasing scalability can be found in Section 5.1. The authors seem to claim their method is scalable with respect to the dimension, but still most of the wall-times are not that different. Along the same lines, I am not sure how interesting sampling from a cube (or a polytope) is in this case. I would assume that most of the "mass" of the distribution is along the boundary, which would make the problem easier, as seen in Table 2, where both methods start to do better as d increases.
- Most of the theory seems to be around manifolds of co-dimension 0, which is only mildly interesting. There is not much theory when the manifold has a positive co-dimension. Also, there is a mismatch between the definitions of $\mathcal{M}$. In Line 72, it is defined as an open subset of another Riemannian manifold $\mathcal{N}$, which is not clearly defined, using several inequalities, and then there are the assumptions in Line 207, which greatly simplify the setup. What are the assumptions that this paper is considering?
- The manifolds are smooth and of co-dimension 0; however, some of the experiments consider non-smooth manifolds, and others with manifolds of co-dimension greater than 0, thus, there seems to be a disconnect between the theory and experiments. Also, there is no quantitative metric for the sampling quality in section 5.3. They seem to "look" better than the very weak baseline of a uniform sampling, but it is not clear that the sampling does capture the correct statistics.
- In general, the experiments are fairly weak, particularly the baselines. Using a uniform distribution is by no means a baseline, and it should be used as a sniff test. One baseline that the authors could have used is the sample on a sphere using their algorithm (seen the sphere embedded in a 3-dimensional space) and using a diffusion model defined on the sphere. Such a comparison would make the point that it can outperform a method specifically tailored for the geometry. Also, the reference to the figures is not really informative, particularly when compared to weak baselines. I would prefer to have some quantitative information.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors mentions some of the limitations of their work, and there is not direct adverse impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our manuscript and for providing helpful and constructive feedback. We were happy to see your review emphasising the implementational simplicity of our approach, as well as the resulting ease with which it can be integrated into existing sampling algorithms.
The concerns you raised in your review relate to the originality and scalability of our method, our choice of baselines, and the type of manifolds covered by our theoretical justification. We will address each of these points in turn below.
## Algorithmic Innovation
> It needs to be clarified what the algorithmic innovation is in the paper. ... Also, it is not clear how valuable is the theoretical treatment.
Rejection sampling is indeed a standard technique; in this manuscript we present several contributions. The first is exactly as the reviewer notes: an application of this old idea to a trendy topic, together with evidence that using this idea in this setting is more effective than existing approaches in the literature. The second is the theoretical treatment, which is necessary to justify the use of rejection sampling in the diffusion models we present. The asymptotic equivalence establishes that the RBM is the continuous-time process corresponding to the Metropolis discretization; this means we can swap the Metropolis discretization in for a reflected walk in reflected diffusion models using the same losses and time-reversal. This is highly non-obvious, as the dynamics of the two discretizations are superficially quite different.
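To make the discretization concrete, here is a minimal sketch of one rejection step (our own illustration on a box constraint with zero drift; in the actual model the drift would come from the learned score, and the acceptance rule follows the rejection idea described here, not the paper's exact algorithm):

```python
import numpy as np

def metropolis_step(x, drift, step, rng, in_domain):
    """Propose a standard Euler-Maruyama update and reject it (keeping
    the current state) if the proposal leaves the constraint set.  As
    the step size shrinks, this walk approximates the reflected dynamics."""
    noise = rng.standard_normal(x.shape)
    proposal = x + step * drift(x) + np.sqrt(2.0 * step) * noise
    return proposal if in_domain(proposal) else x

def sample_unit_cube(d=2, n_steps=2000, step=1e-3, seed=0):
    """Zero-drift example: a walk constrained to the unit cube [0, 1]^d."""
    rng = np.random.default_rng(seed)
    inside = lambda y: bool(np.all((y >= 0.0) & (y <= 1.0)))
    x = np.full(d, 0.5)
    for _ in range(n_steps):
        x = metropolis_step(x, lambda y: np.zeros_like(y), step, rng, inside)
    return x
```

Because a rejected proposal simply keeps the current (in-domain) state, every iterate of the chain stays inside the constraint set by construction, with no reflection geometry required.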
> The authors show that the methodology is asymptotically equivalent to RBM, but ... claim that their algorithm outperforms RBM.
Yes, this is because our approach affords much more tractable learning dynamics (see e.g. the discussion in Section 3.1), which enables it to unambiguously outperform the less tractable RBM-based diffusion models across a range of constraint geometries and applications, both in terms of empirical performance and computational efficiency.
## Scalability
> The authors mention that scalability is one of the main advantages of its method. ...
The proposed method is scalable in two senses. First, it is much more efficient in general than the existing approaches in the literature, and hence is inherently _more_ scalable than existing approaches. Second, it is scalable in exactly the way the reviewer notes when the underlying manifold is a product of constrained manifolds, e.g. a product of simplices or a product of SPD matrices with bounded trace. In this case the problem can indeed be broken up into pieces and run independently, allowing straightforward scaling of this more efficient method in that regime.
## Empirical Evaluation
> In general, the experiments are fairly weak, ... I would prefer to have some quantitative information.
We apologize for the confusion and would like to emphasize that we benchmark our method against recently developed state-of-the-art constrained diffusion models, providing detailed empirical comparisons with models based on the reflected Brownian motion [1,2] and the log-barrier metric [2].
As neither of these methods extends to manifolds with non-convex boundaries, we currently do not include a baseline for the wildfire example on the sphere. We provide only the uniform distribution, as it is relatively standard practice in diffusion papers to highlight what the invariant distribution looks like, especially for non-standard invariant distributions like the uniform over the constrained set we investigate here. Motivated by your comment, we have run an additional experiment on the sphere to provide a quantitative comparison to an unconstrained model. We present only the MMD results here:
| Model | MMD | % in boundary |
| - | - | - |
| Unconstrained | $0.1567 \pm 0.013$ | $63.3\%$ |
| Constrained | $0.1388 \pm 0.015$ | $100\%$ |
We compute the RBF-MMD as described in the appendix, removing samples outside the boundary.
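For concreteness, here is a minimal sketch of an RBF-MMD estimate of the kind reported above. This is illustrative only: the appendix's exact kernel and bandwidth choice may differ, and the median-heuristic bandwidth used here is an assumption.

```python
import numpy as np

def rbf_mmd2(X, Y, bandwidth=None):
    """V-statistic estimate of the squared MMD with an RBF kernel.
    Bandwidth defaults to the median heuristic over pooled pairwise
    distances (a common, but not the only, convention)."""
    Z = np.vstack([X, Y])
    d2 = np.sum((Z[:, None] - Z[None, :]) ** 2, axis=-1)
    if bandwidth is None:
        bandwidth = np.sqrt(np.median(d2[d2 > 0]) / 2)
    K = np.exp(-d2 / (2 * bandwidth ** 2))
    n = len(X)
    return K[:n, :n].mean() + K[n:, n:].mean() - 2 * K[:n, n:].mean()

# Identical sample sets give zero MMD; shifted samples give a larger value.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
assert rbf_mmd2(X, X) < 1e-8
```

In the comparison above, samples of the unconstrained model falling outside the boundary would simply be filtered out before `X` and `Y` are passed in.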
## Manifolds covered by our proof
>Most of the theory seems to be around Manifolds of co-dimension 0, which is only mildly interesting. ... What are the assumptions that this paper is considering?.
>
> The manifolds are smooth and of co-dimension 0 ... there seems to be a disconnect between the theory and experiments.
We agree with the reviewer that the theoretical results of Theorem 3 and Theorem 4 are stated for manifolds of co-dimension 0. We believe that the results can be extended to a more general setting. One approach would be to replace the local parameterization of the manifold as $\\{x \in \mathbb{R}^d,\Phi(x) < 0\\}$ by similar inequalities in local charts. Using the paracompactness of the manifolds under consideration, it seems possible to extend our results in this way; however, this approach is highly technical and we postpone it to future work.
The definition in line 72 provides the broad class of manifolds which our method covers, and our experiments fall within this class. However, we greatly simplify the structure of the manifold to give the theoretical result in line 207. To summarize:
* The methodology, and our experiments, are defined with respect to the class of manifolds defined in line 72.
* In order to simplify the analysis, the validity of the discretization scheme is only assessed under stronger assumptions (as pointed out by the reviewer).
We thank the reviewer for pushing us to clarify this point and will update the paper accordingly. We leave the extension of our theoretical results to future work (as highlighted above this extension is likely to be highly technical).
---
We hope that our additional clarifications and discussion address all of your questions and concerns.
Please let us know if you have any further questions!
---
## References
[1] Lou and Ermon. "Reflected diffusion models." ICML 2023
[2] Fishman et al. "Diffusion Models for Constrained Domains." TMLR, 2023
---
Rebuttal Comment 1.1:
Title: Thank you for the response.
Comment: - The proposed method is scalable in two senses: ...
It seems that in their response the authors claim that they are able to reduce the constants in the computational complexity, which does not render the method inherently scalable. As I mentioned before, there is a clear definition of scalability for algorithms.
If the claim is scalability, which appears in the introduction when the properties of the method are being presented, then it should be backed up by experiments or at least a small section. In addition, following the argument of scalability of the proposed method when the "manifold is a product of constrained manifolds", this renders the following claim of the paper "Accurately modelling distributions on constrained Riemannian manifolds is a challenging problem with a range of impactful practical applications. In this work, we have proposed a mathematically principled and computationally scalable extension of the existing diffusion model methodology to this setting." misleading. This sentence seems to imply the scalability for any constrained Riemannian manifold, and from the authors' response, it seems to be the case on a subset.
In addition, the authors claim that their method is more "numerically stable", but there is no evidence of this claim in the paper.
- We believe that the results can be extended to a more general setting...
In this case, I would strongly encourage the authors to properly nuance the introduction of the method along with the theoretical contributions. Otherwise, the sentence "Our core theoretical contribution is to show that this new discretisation converges to the reflected SDE by using the invariance principle for SDEs with boundary" is also misleading, given that the theoretical contribution does not cover all Riemannian manifolds but only a subset.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to engage with our rebuttal. We were happy to see that our clarifications and additional experimental results addressed your concerns regarding the algorithmic innovation and empirical evaluation of our manuscript. We hope to address your remaining concerns regarding our claims of scalability, numerical stability and the generality of our theoretical justification below.
# 1. Scalability
We agree with the reviewer that we do not present a method that provably achieves the definition of scalability that they are outlining, nor do we currently present empirical evidence justifying the term scalable given that definition. To that end we carried out an additional experiment, running the forward process for a fixed number of discretization steps with a fixed $\beta$. The forward process is the only difference between the reflected and Metropolis diffusion models, so it is what explains the difference in runtime. We used the hypercube because it is easy to check for convergence of the random walk (by binning the hypercube with hyperrectangles and computing the relative vs. expected frequency). We tuned $\beta$ so that the discretized walk approximately converged for both the reflected and Metropolis random walks in the last steps of the walk. We then used the `timeit` utility to compute the following means and standard deviations for running the reflected and Metropolis random walks.
| Dimension | Reflected Mean ± Std. Dev | Metropolis Mean ± Std. Dev | Ratio of Reflected / Metropolis |
|:---:|:---:|:---:|:---:|
|2|0.080±0.000|0.020±0.000|3.95713|
|4|0.141±0.001|0.036±0.001|3.89396|
|8|0.267±0.001|0.069±0.000|3.85959|
|16|0.576±0.023|0.130±0.001|4.43903|
|32|1.405±0.027|0.205±0.001|6.84785|
|64|3.238±0.040|0.300±0.003|10.7962|
|128|9.148±0.075|0.491±0.009|18.6326|
|256|27.600±0.434|1.004±0.013|27.4842|
As a preliminary analysis, we have fit this data with the symbolic regression package `PySR`, yielding the formulas `((0.0002893396 * d) + 0.03379444) * d` for the reflected walk and `0.003400629 * ((log(d) * 6.0751796) + d)` for the Metropolis walk. These would correspond to complexities of $\mathcal{O}(d^2)$ and $\mathcal{O}(d + \log d)=\mathcal{O}(d)$ respectively, though we point out that they are only approximate empirical indicators of scaling behaviour.
We hope that these results are convincing in justifying our use of the term "scalable". It is possible that these numbers would change by a small amount with a more careful tuning procedure for $\beta$, but we expect the clear trend we observe to hold up. We will produce a more comprehensive table using an automated tuning procedure for even higher dimensions and other polytopes, and additionally ensure that our claims of scalability are appropriately contextualized in light of these experimental results and your comments. We will also add a discussion of the product structure we describe above and the benefits we can expect in that regime, even though, as the reviewer is correct to note, it is not central to our claims of scalability.
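For reference, a toy Python sketch of the two forward-step discretizations on the unit hypercube. This is illustrative only, not our benchmarked implementation; the coordinate-folding reflection is a special case that only works for box constraints, and these toy timings will not reproduce the table above.

```python
import timeit
import numpy as np

def reflected_step(x, step, rng):
    """Reflect a Gaussian proposal back into [0, 1]^d by folding each
    coordinate along the period-2 triangle wave. This vectorised trick
    is special to the hypercube; general geometries need explicit,
    per-hit boundary reflections."""
    y = np.mod(x + step * rng.standard_normal(x.shape), 2.0)
    return np.where(y > 1.0, 2.0 - y, y)

def metropolis_step(x, step, rng):
    """Propose a Gaussian move and reject it wholesale if any coordinate
    exits the cube: a single cheap indicator check per step."""
    y = x + step * rng.standard_normal(x.shape)
    return y if np.all((y >= 0.0) & (y <= 1.0)) else x

rng = np.random.default_rng(0)
d = 64
x = np.full(d, 0.5)
# Timing the two forward steps with the same utility used for the table.
t_ref = timeit.timeit(lambda: reflected_step(x, 0.01, rng), number=1000)
t_met = timeit.timeit(lambda: metropolis_step(x, 0.01, rng), number=1000)
```

Both steps keep the chain inside the cube; the difference lies in the per-step boundary handling, which is what the benchmark above isolates.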
# 2. Numerical Stability
The Metropolis method is exactly as numerically stable as a standard diffusion model on a Riemannian manifold since the constraints are handled by the binary constraint function. This is in stark contrast to the baselines we compare against, which all suffer from serious numerical instability issues:
* The numerical instabilities of the log-barrier approach are so pathological they make it impossible to run the ODE evaluation, producing extreme values and NaNs.
* While the numerical issues of the reflected method are less extreme, we found the operation of reflecting off the boundary to induce hard-to-resolve numerical instabilities even in the case of a polytope (never mind more complex geometries), which necessitated extensive hyperparameter tuning to resolve.
We provide a brief discussion of these issues in Appendix C, but agree with the reviewer that we do not clearly articulate the numerical issues of the existing approaches. We will mention these issues explicitly in our discussion of the limitations of existing constrained diffusion models in Section 3.1 and add a more thorough discussion to Appendix C.
# 3. Gaps Between Theory and Applications
Thank you, we agree that a more nuanced presentation of our theoretical contributions is necessary to ensure the theory matches our claims. Below we outline the steps necessary to generalize the results; we will update the manuscript along the lines of the outline and make sure to adjust the respective sections of the Abstract and Introduction accordingly.
We appreciate the thoroughness of your review and hope that these additional clarifications and experimental results address your remaining concerns. Please let us know if you have any further questions.
Sincerely,
The Authors
Title: Additional Experiments and Clarification | Summary: The paper proposes a new discretization of the forward and backward SDEs used for learning and sampling in the diffusion generative model framework in manifolds with inequality constraints. The method relies on "metropolising" a given discretisation of the "unconstrained" manifold (either Euler-Maruyama if the manifold is $\mathbb{R}^d$ or Geodesic random walks if it's not). The discretisation is shown to converge weakly to the reflected SDE and is shown to be numerically simpler to implement and faster for diffusion models on manifolds.
Strengths: The paper proposes a simple approach for sampling and training diffusion models on constrained manifolds. The approach is theoretically justified and is shown to be numerically more efficient than the state of the art. Thus, the proposed algorithm is an interesting tool for practitioners. The paper is mostly well written and accounts well for the state of the art. Several examples with increasing degree of complexity are shown.
Weaknesses: The paper possesses some shortcomings in the presentation, namely some typos that I will refer to in the bottom part of this section.
There are also some fundamental things missing. For example, I could not find the code, since it is not given in the appendix and the link to the anonymous GitHub was inactive when I tried to access it. This is a problem, especially since the paper does not explicitly tell how it calculates the log-likelihood of the test samples; I am thus assuming it uses the ODE formulation presented, for example, in [1].
I also feel that some other metric would be of great value. The paper uses only the log-likelihood of the held-out test set as a metric, but some other distribution-related metric such as the sliced Wasserstein distance can be applied in the first example, especially in the lower-dimensional examples.
Some typos:
Section 3.1: "Practical imitations" → "Practical limitations"
Proposition 5 repeats the definition of Z
[1] De Bortoli, V., Mathieu, E., Hutchinson, M., Thornton, J., Teh, Y. W., & Doucet, A. (2022). Riemannian score-based generative modelling. Advances in Neural Information Processing Systems, 35, 2406-2422.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I feel that a lot of the implementation could be better explained, even though the algorithm is simple. The code should be made available, and I would also like the authors to be more precise about how they calculate the log-likelihood of the test set, even though this can be considered somewhat standard in the literature.
How were the epochs chosen for the log-likelihood calculations? It would be interesting to have a graph of computing time vs. log-likelihood of the held-out test set to understand how those behave over time for several dimensions for each algorithm (Metropolis vs. reflected).
The geospatial data example seems to show that the learned distribution is « coarser » than the target distribution and has some concentration around the boundaries. Is this something to expect? Maybe rejecting a lot of steps in the more non convex parts of boundaries lead to an augmentation of the likelihood?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our manuscript and for providing helpful and constructive feedback.
We were happy to see your review emphasising the improved computational efficiency and empirical performance of our approach and acknowledging its usefulness to a range of practitioners.
The concerns you raised in your review relate to the availability of our code, the log-likelihood evaluation and the use of an additional performance metric. We will address each of these points in turn below.
---
## Code Availability
> For example, I could not find the code, since it is not given in the appendix and the link to the anonymous GitHub was inactive when I tried to access it. This is a problem, especially since the paper does not explicitly tell how it calculates the log-likelihood of the test samples; I am thus assuming it uses the ODE formulation presented, for example, in [1].
>
> The code should be made available, and I would also like the authors to be more precise about how they calculate the log-likelihood of the test set, even though this can be considered somewhat standard in the literature.
We have checked the anonymised links on multiple different devices and they seem to work correctly, as they did when we originally added them to the manuscript. If the code anonymisation site was down at any point during the review period, we apologise for the inconvenience. Hopefully you can access them now.
## Additional Performance Metric
> The paper uses only the log-likelihood of the held-out test set as a metric, but some other distribution-related metric such as the sliced Wasserstein distance can be applied in the first example, especially in the lower-dimensional examples.
We would like to point out that we present further evaluations in Appendix F.1 using the MMD distance, as well as a comparison to an additional baseline method presented in [1]. If you would like us to additionally compare the models using the sliced Wasserstein distance, we are happy to do so during the author discussion period. The agreement between the MMD and the log-likelihood suggests that the log-likelihood is a reasonable metric.
## Log-likelihood Evaluation
> I would also like the authors to be more precise about how they calculate the log-likelihood of the test set, even though this can be considered somewhat standard in the literature.
We will add details of how we compute the likelihood to the main paper; it is indeed computed as an ODE, as in the reviewer's reference. More precisely, we compute the log-likelihood using the equivalence between diffusion models and continuous normalizing flows (see [2,3] for instance). In the case of reflected diffusion models this equivalence was first used in [4]. As in continuous normalizing flow approaches, the log-likelihood computation leverages tools from Neural ODEs [5].
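As a minimal illustration of that computation, here is a forward-Euler sketch of the instantaneous change of variables underlying the continuous-normalizing-flow likelihood. The `drift` and `div_drift` callables are placeholders standing in for the model's probability-flow vector field and its (in practice Hutchinson-estimated) divergence; this is a sketch of the standard recipe, not our actual implementation.

```python
import numpy as np

def log_likelihood_delta(x0, drift, div_drift, t0=0.0, t1=1.0, n_steps=1000):
    """Euler-integrate the probability-flow ODE dx/dt = drift(x, t)
    jointly with the instantaneous change of variables
    d log p(x(t))/dt = -div drift(x(t), t), so that
    log p(x0) = log p_prior(x(t1)) + delta_logp."""
    dt = (t1 - t0) / n_steps
    x, delta_logp = x0.copy(), 0.0
    for i in range(n_steps):
        t = t0 + i * dt
        delta_logp += div_drift(x, t) * dt
        x = x + drift(x, t) * dt
    return x, delta_logp

# Sanity check with a linear contraction dx/dt = -x in d = 2 dimensions,
# whose divergence is exactly -d, so delta_logp integrates to -d.
d = 2
xT, dlp = log_likelihood_delta(np.ones(d), lambda x, t: -x,
                               lambda x, t: -float(d))
assert abs(dlp + d) < 1e-9
assert np.allclose(xT, np.exp(-1.0), atol=1e-2)
```

In practice an adaptive Neural ODE solver replaces the fixed Euler loop, and the divergence is estimated with Hutchinson's trace trick rather than computed exactly.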
> How the epochs were chosen for the log likelihood calculations? It'd be interesting to have a graph of computing time vs log-likelihood of the held out test set to understand how those behave in time for several dimensions for each algorithm (metropolised vs reflected).
We fixed the number of optimization steps for the two methods at a point when the optimization had converged and used the same end point for both. This choice is somewhat arbitrary, and we can easily add a plot showing how the log-likelihood evolves over the course of training during the discussion period if the reviewer would like.
> The geospatial data example seems to show that the learned distribution is « coarser » than the target distribution and has some concentration around the boundaries. Is this something to expect? Maybe rejecting a lot of steps in the more non convex parts of boundaries lead to an augmentation of the likelihood?
We agree that there may be some concentration around the non-convex parts of the boundaries. This is particularly interesting given the clear convergence of the forward process to the invariant distribution. We think it is likely that it is caused not by the Metropolis discretization but by the added complexity the score network must learn to "jump" out of the non-convex parts of the constrained set. It seems plausible that more careful architecture engineering could fix this problem, but we think the example in the manuscript serves as a proof of concept for the method.
---
We hope that our additional clarifications and discussion address all of your questions and concerns.
Please let us know if you have any further questions!
## References
[1] Fishman et al. "Diffusion Models for Constrained Domains." TMLR, 2023
[2] Song et al. "Score-Based Generative Modeling through Stochastic Differential Equations", 2021
[3] Huang et al. "A Variational Perspective on Diffusion-Based Generative Models and Score Matching", 2021
[4] Lou and Ermon. "Reflected Diffusion Models", 2023
[5] Chen et al. "Neural Ordinary Differential Equations", 2018
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response to my questions. I do think the MMD is a better suited metric. I have also checked the link and it seems to be working now. Therefore I'm raising my score. | Summary: This paper proposes a Metropolis sampling for constraint diffusion in the context of generative modeling. The authors show that the proposed algorithm is a discretization of reflected Brownian motion on manifold (with rejection). The authors also apply the proposed algorithm to several synthetic and real data sets.
Strengths: This paper considers a Metropolis sampling algorithm on manifolds in light of the recent development of denoising diffusion models in generative modeling. The paper is well written, and I enjoyed reading it.
Weaknesses: There are several weakness.
(1) As the authors pointed out themselves, the idea of (Metropolis) sampling on manifolds is closely related to the ball walk. There has been extensive work in this direction; see e.g. Dwivedi, Chen, Wainwright, and Yu, "Log-concave sampling: Metropolis-Hastings algorithms are fast", JMLR, 2018. The authors fail to make connections to recent developments (and literature).
(2) In the MCMC literature, there are two general "discretization' techniques: ULA (unadjusted Langevein Algo) and MALA (Metropolis adjusted Langevin Algo). The proposed algorithm is close to the idea of MALA, which the authors failed to make connections.
(3) The main theorems (Theorems 2 & 4) are quite standard to experts in reflected Brownian motion. This generally follows from the techniques developed by Burdzy, Ramanan, Williams, etc., in light of Stroock-Varadhan. Only Proposition 5 seems to be new (but not hard).
(4) One of the most important aspects of MCMC sampling is the mixing time (or convergence rate). This paper does not seem to provide any related analysis. Though on the continuous level this can be read from existing results, the mixing time in the discrete algorithm is important (especially concerning the choice of the step size $\gamma$).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weaknesses for my comments on the paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our manuscript and for the helpful and constructive feedback. We were happy to see you emphasize that we extend methods for generative modeling on constrained Riemannian manifolds [1,2] and are glad you enjoyed the paper.
The concerns you raised relate to connections to existing prior art and an analysis of the mixing time of our algorithm. We will address each of these points in turn below.
## Prior Literature
### 1
We agree with the reviewer that the proposed forward process is linked with the ball walk (and related algorithms). We do not claim that our algorithm is superior for sampling from an unnormalized density defined on the constrained manifold. We are aware that there is a strong line of work dealing with this problem (see e.g. [10], which exhibits strong theoretical guarantees). One of the contributions of our paper is to showcase that we can use these methods to derive efficient numerical schemes for generative modeling under constraints. To the best of our knowledge, the ball walk and derived algorithms have not been applied to define the forward noising process for diffusion models. Nevertheless, we will extend our discussion of related work with [3,7,8,9] and the references therein.
### 2
We will add a paragraph making the connection with MALA and the distinction with ULA. However, our point of view is somewhat different than the usual motivation behind MALA, which, in the unconstrained setting, acts as a way to remove the bias from ULA, see Besag's comment: "if instead one uses s' merely as a Hastings proposal for the next state, then the usual acceptance probability ensures that p is an exact stationary distribution of the modified Markov chain" in [4]. Our proposal can be recast in this context but our motivation is different since we aim at a computationally efficient discretisation of the forward noising of the reflected diffusion.
### 3
This is an important point. We are not aware of any existing proof of Theorem 2 and Theorem 4. We would be grateful if the reviewer could point us towards such a reference. We agree that the proof is not difficult and that our work is an application of the techniques of Stroock and Varadhan to prove the sub-martingale property. However, we would like to make the following comments:
* If the proof does not exist in the literature, we believe it is valuable for the community (and especially newcomers to the field, as not everyone in ML is familiar with reflected processes). We strongly believe that making such results available to the community will help to foster the interaction between the ML community and the MCMC community.
* While the proof is technical (and again we agree that our aim is to verify the conditions of [3], and not introduce any novel techniques to show that a numerical scheme is a valid discretisation), the main difficulty is to obtain the lower bounds in Proposition 5 and to control the behavior of the drift near the boundary (see for example Proposition 20). To do so, we needed Theorem 7 which ensures that, provided enough regularity on the boundary, one can obtain such controls. To the best of our knowledge, this approach is new.
We would appreciate if the reviewer could point us to a reference proving Theorems 2 and 4. Regarding Prop 5, we do not believe a result must be hard to serve as the theoretical foundation for a novel algorithm exhibiting state-of-the-art empirical performance.
## 4 Mixing Time
We want to emphasize that our paper is not an MCMC paper. The only target we consider for the forward noising process is the uniform distribution on the constrained manifold under consideration. In practice, we set the parameter $T$ (the running time of the forward noising process) so that, at time $T$, the distribution of the forward process is close to the uniform. While the parameter $T$ is important, it is easily tunable, and in all of our experiments we observe fast mixing of the Markov chain. We agree that it would be valuable to obtain 1) quantitative non-asymptotic results regarding the convergence of the proposed scheme (note: we only target the uniform distribution) and 2) bounds on the bias incurred by the choice of the stepsize.
However, in practice the mixing time and the discretization stepsize are not bottlenecks in performance of the algorithm. As shown in [5,6], the main source of error in denoising diffusion models (in the unconstrained setting) is the approximation of the score. This is because the convergence to the invariant distribution is exponential (with respect to the total variation distance for instance) and the bias incurred by the discretization stepsize is dominated by the neural approximation of the score.
---
We hope that our additional clarifications and discussion address your questions and concerns.
Please let us know if you have any further questions!
## References
[1] Lou and Ermon. "Reflected diffusion models." ICML 2023
[2] Fishman, et al. "Diffusion Models for Constrained Domains." TMLR, 2023
[3] Dwivedi, Chen, Wainwright, and Yu's Log-concave sampling: Metropolis-Hastings algorithms are fast, JMLR, 2018
[4] J. Besag (1994) -- "Comments on "Representations of knowledge in complex systems"
[5] Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, Anru R. Zhang -- Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions (2022)
[6] De Bortoli -- Convergence of denoising diffusion models under the manifold hypothesis (2022)
[7] Vempala -- Geometric random walks: a survey
[8] Cousins, Vempala -- Bypassing KLS: Gaussian Cooling and an $O(n^3)$ Volume Algorithm (2015)
[9] L. Lovasz and M. Simonovits. Random walks in a convex body and an improved volume algorithm. (1993)
[10] Kook, Lee, Shen, Vempala -- Condition-number-independent convergence rate of Riemannian Hamiltonian Monte Carlo with numerical integrators (2022)
---
Rebuttal Comment 1.1:
Title: Extending theoretical results to more general manifolds
Comment: We also want to note that in our response to Reviewer vCsC we additionally outline how to extend our results to a more general class of manifolds, which may be relevant to the reviewer's evaluation here. | Summary: This paper introduces a new approach for generative modelling on constrained Riemannian manifolds, building upon diffusion models. The proposed method implements a Metropolis sampling scheme and offers computational efficiency and improved empirical performance over previous models, particularly with increased constraint complexity. It provides a valid discretisation of the reflected Brownian motion and demonstrates its scalability and flexibility across several application domains.
Strengths: The paper has good originality of incorporating MCMC type of sampling with diffusion model on constrained Riemannian manifolds.
The overall quality of the paper is good, the proposed method should be of interests to a group of researchers working on diffusion models.
Weaknesses: The paper is a bit hard to follow, it would be helpful to further refine the presentation of the methodology sections. Specifically, it would be nice to have a diagram illustrating the relationship between diffusion model, Metropolis sampler, and constrained Riemannian manifolds.
In my view, some of the mathematics in Section 2 could be moved to the supplementary material; the many symbols in this section make readers spend time on each definition and formula, and make it hard to understand the whole picture and the relationship between the different components.
Some illustrations might need further refinement; for example, in Figure 5, panels (b) and (c) are very hard to tell apart, and the comparison conveys little information.
Numerical comparison with existing methods and evaluation metrics are relatively limited; the proposed method is mainly compared with the reflected method. The metrics are limited: does the log-likelihood comprehensively reflect the main performance improvement for such different datasets (robotics and proteins)? Especially for the proteins dataset, the change is relatively small, i.e., from 15.2 to 15.33.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: MCMC types of approaches sometimes suffer from long 'burn-in', I am wondering whether the authors have consider this case and evaluate its impact to the performance of the proposed method
Is it possible that Metropolis will give some samples that are not good enough (not representative of the latent space) for the diffusion model and mislead the training, as acceptance by Metropolis does not guarantee that samples are good?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations are stated, e.g., potential poor performance on certain constraint geometries. The future work sounds sensible.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our manuscript and for providing helpful and constructive feedback.
We were happy to see your review emphasising the improved computational efficiency and empirical performance of our approach, as well as acknowledging the quality of the paper and its relevance to other researchers in this space.
The concerns you raised in your review relate to the presentational clarity and empirical evaluation of the manuscript. We will address each of these points in turn below.
## Empirical Evaluation
> Numerical comparison with existing methods and evaluation metrics are relatively limited; the proposed method is mainly compared with the reflected method. The metrics are limited: does the log-likelihood comprehensively reflect the main performance improvement for such different datasets (robotics and proteins)? Especially for the proteins dataset, the change is relatively small, i.e., from 15.2 to 15.33.
We provide both additional comparisons to the log-barrier method proposed in [4] and comparisons using the MMD instead of the log-likelihood in Section F.1 of the Appendix, to which we refer at the beginning of Section 5. The agreement between the MMD and the log-likelihood suggests that the log-likelihood is a reasonable metric. It is true that for the proteins dataset the improvement is small; this is corroborated by both the visualizations and the MMD. Training (as well as sampling) is nevertheless a factor of 10 faster, which is a crucial advantage for scaling this approach up to larger proteins.
## Presentational Clarity
> In my view, some of the mathematics in section 2 can be moved to supplementary, too much symbols in this section makes readers to spend time on each definition and formula, and hard to understand the whole picture and the relationship between different components.
While we agree that shortening the outline of the proof would make the paper more accessible to applications-driven researchers, we believe that the theory presented in Section 3.2 is one of the central contributions of the manuscript, which is why we chose to include it. We believe the proof is interesting and useful for machine learning theorists working on non-standard diffusion models.
> Some illustrations might need further refinement, for example, Figure 5 (b) and (c) it is very hard to tell the difference between them and it conveys little information.
The difference between Figures 5b and 5c is hard to discern because the models fit the data distribution equally well - though, as mentioned in the previous section, our approach fits it slightly better and is 10 times faster. We believe that this makes any qualitative visual comparison difficult, and would like to mention that Figures 4 and 5 are mainly included to provide an intuitive visualisation of the data, learned, and uniform distributions on constraint geometries that may not be well-known to the manuscript's intended target audience.
> Specifically, it would be nice to have a diagram illustrating the relationship between diffusion model, Metropolis sampler, and constrained Riemannian manifolds.
We agree that such a diagram would make the connection between Figure 1 and the methodology outlined in Sections 2 and 3 more apparent. We have added a figure to the appendix to clarify how these components are related and will refer to it in the main text.
## Questions
> MCMC types of approaches sometimes suffer from long 'burn-in', I am wondering whether the authors have consider this case and evaluate its impact to the performance of the proposed method
We want to emphasize that our paper is not an MCMC paper. We are only marginally concerned with the mixing time of the algorithm, as our target is always the uniform distribution for the forward noising process. We agree that it would be valuable to obtain 1) quantitative non-asymptotic results regarding the convergence of the proposed scheme (however, we again emphasize that we only target the uniform distribution) and 2) bounds on the bias incurred by the choice of the stepsize.
However, in practice the mixing time and the discretization stepsize are not the bottlenecks of the performance of the algorithm. As shown in [1,2], the main source of error in denoising diffusion models (in the unconstrained setting) is the approximation of the network. This is because the convergence to the invariant distribution is exponential (with respect to the total variation distance for instance) and the bias incurred by the discretization stepsize is largely dominated by the neural network approximation of the score.
> Is it possible that Metropolis will give some samples that are not good enough (not representative of the latent space) for diffusion model, and mislead the training, - as samples accepted by Metropolis doesn't guarantee they are good samples.
This is a valid point that may warrant further exploration. However, we think that this is unlikely, as our central theoretical contribution shows that the Metropolis approach is a valid discretisation of the reflected Brownian motion, which was demonstrated to produce high-quality samples on the constrained geometries we consider [3,4].
---
We hope that our additional clarifications and proposed changes to the layout of the paper address all of your questions and concerns.
Please let us know if you have any further questions!
---
## References
[1] Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, Anru R. Zhang – Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions (2022)
[2] De Bortoli – Convergence of denoising diffusion models under the manifold hypothesis (2022)
[3] Lou and Ermon. "Reflected diffusion models." ICML 2023
[4] Fishman, et al. "Diffusion Models for Constrained Domains." TMLR, 2023
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. The answers indeed make it a bit clearer for me to understand the paper, especially the uniform distribution one. Will keep the rating at borderline accept if the authors follow the response to improve the overall presentation of the methodology. | Rebuttal 1:
Rebuttal: # General Response
We would like to thank all reviewers for the time and effort they have put into reviewing our manuscript and for the valuable and constructive feedback they have provided.
---
In our manuscript, we present a new method for generative modelling on constrained Riemannian manifolds that affords substantial gains in computational efficiency and empirical performance compared to the current state of the art [1, 2].
We are delighted to see that reviewers appreciated the practical significance of our method across a range of settings, highlighting that it is both "of interest to a group of researchers working on diffusion models" (**reviewer u5aU**) and an "interesting tool for practitioners" (**reviewer a8R9**).
We are also happy to see that reviewers appreciated the "good originality" (**reviewer u5aU**) and "theoretical justification" (**reviewer a8R9**) of our approach, noting that "the proposed algorithm is very simple to modify [and] code" (**reviewer vCsC**), "requires much less intrusive modifications to the sampling algorithm" (**reviewer vCsC**) and is "numerically more efficient than the state of the art" (**reviewer a8R9**).
Additionally, we are pleased that reviewers found the paper to be "well-written" (**reviewers fvg9 and a8R9**) and that they "enjoyed reading it" (**reviewer fvg9**).
---
The reviewers have also raised a range of points regarding different parts of the paper. To address these, we have responded with additional discussions and clarifications of our work under the respective reviews. We have also added a baseline model to the paper, which further shows the strength of the proposed method (discussed in response to **reviewer vCsC**). We hope that these remarks convincingly address the reviewers' concerns and are happy to answer any further questions.
Sincerely,
The Authors
---
## References
[1] Lou and Ermon. "Reflected diffusion models." ICML 2023
[2] Fishman et al. "Diffusion Models for Constrained Domains." TMLR, 2023 | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Generalized test utilities for long-tail performance in extreme multi-label classification | Accept (poster) | Summary: This paper focuses on the long-tail problem in extreme multi-label classification. To address this problem, the authors propose a new method to optimize performance metrics for extreme multi-label tasks via the expected test utility (ETU) framework. Experimental and theoretical results are also provided.
Strengths: 1. The long-tail problem in the extreme multi-label classification task is important. It is good to explore how to fairly evaluate the performance of multi-label learning algorithms on tail labels.
2. The proposal is technically sound. Directly optimizing performance measures for tail labels makes sense.
Weaknesses: The main concern about the paper is the experiments. There are also some other novel algorithms that are proposed for the long-tail problem in multi-label classification tasks and can achieve good performance, but the authors did not compare with these studies in the experiments.
Moreover, there are some minor issues; for instance, the related work should be discussed.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. There are also some other works that focus on the long-tailed multi-label classification tasks, instead of directly optimizing the performance metrics, how about the performance of these works?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the review and questions. We appreciate the hard work. Below we address the main question.
In contrast to many works in extreme classification, which are mainly algorithmic and aim to obtain "better" performance on tail labels under existing metrics, our work focuses on the performance metric itself. Through Table 1 of the paper, we argue that the prevalent instance-based measures used by the community, such as P@k and PSP@k, are largely insensitive to the removal of a large fraction of tail labels. We speculate that the novel algorithms mentioned by the reviewer would not help much in this regard, as the existing metrics mainly focus on head labels.
Furthermore, we would like to highlight the comments by reviewer 4FGQ : "It is well known that the existing metrics for XMC problems cannot measure the performance on tail labels because of the data distribution. I agree with the authors that the widely used propensity scores are heuristics that are not to be used as metrics, especially it requires hyper-parameters which would introduce much consistency issues when used for comparison. The authors express the metrics in the ETU framework, thus unifying the existing and potential metrics." In other words, a direct algorithm-to-algorithm comparison on existing metrics would be misleading. The comparison on the budgeted@k macro-measure could also be challenging as the mentioned algorithms would need to be tailored for predicting top-$k$ labels.
In our work we are (i) investigating some shortcomings of the existing metrics, (ii) proposing alternatives based on macro-measures for budgeted@k predictions, and (iii) introducing algorithms to optimize them based on a plug-in approach assuming access to an estimator of the conditional label probabilities $\eta$. The algorithms mentioned by the reviewer might be useful in improving the estimates of the $\eta$s. This is an interesting question, but beyond the scope of this paper. Nevertheless, we would appreciate it if the reviewer would explicitly say which algorithms she/he has in mind. | Summary: This paper analyzes generalized metrics budgeted "at k" by formulating them in the expected test utility (ETU) framework. The authors derive optimal prediction rules and construct computationally efficient approximations with provable regret guarantees and robustness against model misspecification.
Strengths: This paper is sound and clearly written.
Weaknesses: Limited contribution:
The paper investigates optimal solutions for the class of utility functions that can be linearly decomposed over labels into binary utilities. Is it possible to extend this method to a broader class of utility functions?
The block coordinate ascent algorithm does not look new to me.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Table 1 shows that the macro measures are suitable for long tails experimentally. Would you please explain the reason for it in theory?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the review and questions. We appreciate the hard work. Below we address the main concerns.
**Regarding the extensibility of the proposed approach**
We first want to stress that the class of functions studied in this paper - utility functions that decompose linearly over labels into binary utilities - is already quite general, including standard instance-wise measures (at k) such as precision, the Hamming loss, weighted instance-wise losses such as PSP, as well as macro measures with an arbitrary binary loss function as basis (cf. the Table). Further, linear combinations of these utilities are permitted (see our answer to Reviewer ELgR).
The first challenge for extensibility is that a crucial feature of the investigated loss functions must be preserved: they can be optimized using only the marginal label probabilities $\eta_i(x) = P[Y_i=1 | X]$. This is important for the plug-in approach, as estimating the full joint distribution of labels is intractable. This still leaves some design space for generalizations, foremost probably replacing the arithmetic mean in the macro-average with a more general aggregation function. We haven't investigated these options, but if the reviewer has a concrete example of a loss function in mind, we would be happy to discuss it.
---
> *Table 1 shows that the macro measures are suitable for long tails experimentally. Would you please explain the reason for it in theory?*
Please note that we do not claim that *all* macro-averages are long-tail metrics, e.g., macro-average Hamming loss is equivalent to instance-wise Hamming loss, and thus not more tail-adapted. The main claim of Table 1 is that typical instance-wise loss functions are almost invariant under the removal of a large portion of tail labels, whereas there exist macro metrics that are sensitive to this operation. There is no "first principles" theoretical explanation for why macro-averages are good for measuring tail-performance. We can give a partial explanation as follows:
Assume that the binary utility over which we take the macro-average has range $[0, 1]$, and that it is zero if there is no true positive in the prediction. If we have $m$ labels in total, of which $m'$ are tail labels, then removing the entire tail means that the binary utility on all these $m'$ labels is zero, and the overall macro-average is upper-bounded by $1 - m'/m$. This is not true for popular instance-wise measures such as Precision@k.
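This effect can be illustrated with a small numeric toy experiment (hypothetical data and a deliberately head-biased predictor of our own construction, not figures from the paper): instance-wise P@k stays high while the macro-average collapses towards the bound above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1000, 50, 5   # instances, labels, prediction budget
m_tail = 40             # number of (rare) tail labels

# Skewed toy label matrix: head labels frequent, tail labels rare.
priors = np.array([0.5] * (m - m_tail) + [0.01] * m_tail)
Y = rng.random((n, m)) < priors

# A predictor that always outputs the k most frequent (head) labels.
top_head = np.argsort(-priors)[:k]
Yhat = np.zeros_like(Y)
Yhat[:, top_head] = True

# Instance-wise precision@k: fraction of predicted labels that are relevant.
inst_p_at_k = (Y & Yhat).sum() / (n * k)

# Macro-precision: average per-label precision, zero for never-predicted labels.
tp = (Y & Yhat).sum(axis=0)
pred = Yhat.sum(axis=0)
macro_p = np.mean(np.where(pred > 0, tp / np.maximum(pred, 1), 0.0))

# The macro-average is dragged down by the ignored tail, while instance P@k
# would be unchanged if the tail labels were removed entirely.
print(inst_p_at_k, macro_p)
```

Here instance-wise P@k lands near 0.5 while macro-precision is an order of magnitude lower, matching the intuition that only macro-style measures penalize ignoring the tail.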
---
Rebuttal Comment 1.1:
Comment: Thanks for your clarifications. I keep my score unchanged | Summary: In this paper, the authors studies the evaluation metrics for the long tailed extreme multilabel classification problems. Compared with existing heuristics, such as PSP, they formulate the metrics in the expected test utility framework. Inference rules are derived to obtain optimal metrics. Approximations are construct for the metrics that are difficult to compute.
Strengths: This paper picks an interesting but not well-studied direction and gives a thorough study with reliable derivations and proofs. It is well known that the existing metrics for XMC problems cannot measure the performance on tail labels because of the data distribution. I agree with the authors that the widely used propensity scores are heuristics that are not to be used as metrics, especially since they require hyper-parameters, which would introduce consistency issues when used for comparison. The authors express the metrics in the ETU framework, thus unifying the existing and potential metrics.
The paper gives detailed derivations for the optimal prediction rules for each of the derived metrics and provides cheap and easily applicable approximations for metrics that are hard to compute. Empirical results on 4 public XMC benchmark datasets indicates that the optimal rules can actually obtain the best score for the corresponding metrics.
Weaknesses: The paper is overall well presented but some of the terms are used without introducing/explaining, such as "macro-average".
Minor:
* Line 82: $C_{11}$ to true positives
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Inference with a global budget k is easy to implement but not always optimal in terms of utility-budget trade-off. If there's no top-k constraint, how can we design the metrics/inference rules to determine the number of predictions for each instance?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It would be great to see the complexity analysis of the optimal prediction rules to understand the inference overhead.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the review and questions. We appreciate the hard work. Below we address the main questions.
> *Inference with a global budget k is easy to implement but not always optimal in terms of utility-budget trade-off. If there's no top-k constraint, how can we design the metrics/inference rules to determine the number of predictions for each instance?*
We are not entirely sure what the reviewer is asking for. We would appreciate additional clarification if possible.
We agree that the global budget of $k \times n$ labels, with $n$ being the number of test examples, could be an alternative definition of the problem. However, it would be suited to different applications than our formulation. In such a case, the optimization procedure would determine the number of predicted labels per instance. Let us also notice that by using unbudgeted metrics like Hamming loss or the standard macro-averaged F-measure one naturally obtains a varying number of predictions per instance.
---
> *It would be great to see the complexity analysis of the optimal prediction rules to understand the inference overhead.*
We discuss the complexity in Section 6 of our paper. One iteration of the proposed block coordinate algorithm, assuming we have a non-sparse matrix of estimates of $\eta$, has time complexity $O(n(m \log k))$ and space complexity $O(nm)$.
This can be improved by short-listing using a sparse matrix of $\eta$s with only $k'$ non-zero elements per row, resulting in $O(n(k' \log k))$ time and $O(n(k' + k))$ space complexity. The important question is then how many iterations the algorithm runs before terminating. We also answer this question in the response to Reviewer 4ZEa. For metrics taking values in a bounded range and $\epsilon > 0$, the algorithm can only make up to $O(1/\epsilon)$ steps in the `while` loop, as each step must improve the objective by at least $\epsilon$. However, even when $\epsilon=0$, the algorithm terminates after a finite number of steps.
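For concreteness, a simplified sketch of such a block coordinate ascent loop for expected macro-precision@k follows. This is our own illustrative rendering, not the paper's implementation: the initialization, the `eps`/`max_iter` stopping rule, and all names are ours, and the dense-matrix version shown has the $O(nm)$ footprint discussed above rather than the sparse short-listed one.

```python
import numpy as np

def macro_p_at_k_bca(eta, k, eps=1e-6, max_iter=100, seed=0):
    """Block coordinate ascent on expected macro-precision@k (sketch).

    eta: (n, m) matrix of estimated marginal label probabilities.
    Each block update re-selects the k labels of one instance while
    holding all other instances' predictions fixed.
    """
    n, m = eta.shape
    rng = np.random.default_rng(seed)
    # Initialise with k random labels per instance.
    Yhat = np.zeros((n, m), dtype=bool)
    for i in range(n):
        Yhat[i, rng.choice(m, k, replace=False)] = True

    t = (eta * Yhat).sum(axis=0)        # expected true positives per label
    p = Yhat.sum(axis=0).astype(float)  # number of predictions per label

    def objective():
        return np.mean(np.divide(t, p, out=np.zeros(m), where=p > 0))

    obj = objective()
    for _ in range(max_iter):
        for i in range(n):
            # Remove instance i's block, then re-select its k labels greedily.
            t -= eta[i] * Yhat[i]
            p -= Yhat[i]
            old = np.divide(t, p, out=np.zeros(m), where=p > 0)
            # Marginal gain of predicting label j for instance i; gains are
            # independent across labels, so picking the top-k is optimal here.
            gain = (t + eta[i]) / (p + 1) - old
            Yhat[i] = False
            Yhat[i, np.argsort(-gain)[:k]] = True
            t += eta[i] * Yhat[i]
            p += Yhat[i]
        new_obj = objective()
        if new_obj - obj < eps:
            break
        obj = new_obj
    return Yhat, obj
```

Each inner update cannot decrease the objective, which is why the loop terminates once the per-sweep improvement drops below `eps`.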
In practice, despite small $\epsilon$, the algorithm usually terminates after a few iterations (in the worst observed cases, it was a few tens of iterations). In Table 1 in the PDF attached to the global response, we also present the number of iterations and CPU time that was used for our inference procedures.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's response. I will keep my score unchanged. | Summary: This paper critiques that the existing extreme classification evaluation metrics don't give the complete picture with respect to all labels (more specifically performance on head labels overpowers performance on tail labels), hence it recommends using macro-averaged metrics which are more favorable to tail labels. The authors use expected test utility framework to come up with BCA based methods to get modified prediction rules given a prediction matrix from a trained XMC model. Experiments on benchmark datasets indicate that just taking the top-k predictions is not ideal for marco averaged metrics and the proposed approach performs significantly better.
Strengths: - The proposed approach works as a plugin method that can be used with any existing XMC method without any need for retraining
- Proposed approach is well reasoned
- Approximate versions scalable to large datasets without the need for specialized hardware
Weaknesses: - Not sure how important macro measures are for evaluating performance on XMC datasets, as the data imbalance is inherent to XMC applications and in most scenarios we do want to mimic this imbalance in model predictions as well (e.g., in an ad recommendation problem with a billion ads in the corpus, certain products are popular, so giving equal weight to each product during evaluation might not be very useful). I agree that we do want to improve the quality of predictions on tail labels as well, but not at the cost of making worse overall predictions. Results indicate that trying to do well on macro-averaged metrics heavily impacts the performance on standard metrics, which is not desirable.
- Section 4 is a bit hard to follow
Minor typo:
line 82, "c11 to true negatives." -> "c11 to true positives."
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: is it possible to optimize for a weighted combination of standard precision and macro precision, so that we can smoothly interpolate between these two metrics?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the review and questions. We appreciate the hard work. Below we address the main questions:
**Regarding the importance of macro measures for evaluating performance on XMC datasets given their inherent data imbalance, and the trade-off between standard and macro-averaged metrics.**
We think it is a serious problem if a model only manages to get *at least one* true positive on about 25% of all labels (as suggested by the coverage results in Table 2). Ideally, we would like to improve the performance on tail labels without sacrificing too much performance on head labels. This naturally leads to the question the reviewer has posed: whether it is possible to interpolate between the instance and macro metrics to find a better trade-off. The answer is yes, and there are actually two ways this can be achieved.
The first is a straightforward interpolation, as instance-wise precision-at-k is covered by our framework, and our class of utility functions is closed under linear combinations. Such an objective can be optimized by the proposed block-coordinate algorithm without any modification. In the PDF attached to the general response, we present plots with the results of optimizing the following two objectives for different values of $\alpha$:
$\Psi(\mathbf{Y}, \mathbf{\hat Y}) = (1 - \alpha) \times \text{Instance-P}@k(\mathbf{Y}, \mathbf{\hat Y}) + \alpha \times \text{Macro-P}@k(\mathbf{Y}, \mathbf{\hat Y})$
and
$\Psi(\mathbf{Y}, \mathbf{\hat Y}) = (1 - \alpha) \times \text{Instance-P}@k(\mathbf{Y}, \mathbf{\hat Y}) + \alpha \times \text{Macro-F1}@k(\mathbf{Y}, \mathbf{\hat Y})$.
The plots show that the instance-vs-macro curve has a nice concave shape that dominates simple baselines. In particular, we can initially improve macro-measures significantly with only a minor drop in instance-measures, and only when we optimize much more strongly for macro-measures do we get larger drops in instance-wise measures. A particularly notable feature of the plug-in approach is that the curves in the figure are cheap to produce, since there is no need for expensive re-training of the entire architecture, so one can easily select an optimal interpolation constant according to some criterion, such as a maximum allowed decrease in instance-wise performance.
Second, we note that our framework admits using a different binary measure for each label. The simplest way to exploit this is to use a weighted macro-average, giving more weight to head labels. We have actually employed this in the paper, in the sense that instance-wise precision-at-k is just macro-precision-at-k with a $1/\text{prior}$ weighting for the different labels. Parametrizing the exponent, $1/\text{prior}^\beta$, allows for a different way of interpolating towards standard macro-averages, which are realized by $\beta=0$. The results for $\beta=1/2$ are presented in Table 2.
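The weighted macro-average family described above can be sketched in a few lines. This is our own illustrative rendering: the normalization by the weight sum and the function name are our choices, not necessarily the paper's formulation, and note that with this weighting small priors (tail labels) receive *larger* weights as $\beta$ grows.

```python
import numpy as np

def weighted_macro_precision(Y, Yhat, priors, beta):
    """Macro-precision with per-label weights 1/prior**beta (illustrative).

    beta = 0 recovers the standard (unweighted) macro-precision; larger
    beta shifts weight between head and tail labels via the label priors.
    """
    tp = (Y & Yhat).sum(axis=0).astype(float)
    pred = Yhat.sum(axis=0).astype(float)
    # Per-label precision; labels that are never predicted contribute 0.
    prec = np.divide(tp, pred, out=np.zeros_like(tp), where=pred > 0)
    w = priors ** (-beta)
    return (w * prec).sum() / w.sum()
```

Since the weights only rescale per-label precisions that lie in $[0, 1]$, every member of the family is itself bounded in $[0, 1]$, and sweeping $\beta$ gives the interpolation discussed above without retraining anything.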
---
> *Section 4 is a bit hard to follow*
We thank the reviewer for the suggestion. We will definitely try to improve the clarity of this section in the next version using the additional space (if the reviewer has more specific suggestions, we will be glad to hear them).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. Looking at Figure 1 in the additional pdf, it seems that the improvements over baselines offered on the bigger Amazon-670K dataset are minor compared to the smaller ones. I am still not entirely convinced about the utility of macro measures in isolation for XMC scenarios, but I do think they can have applications in some specific cases, and the proposed approach and its modifications (which interpolate between macro and instance-wise precision) are a reasonable way of solving the problem.
---
Reply to Comment 1.1.1:
Title: Re: Official Comment by Reviewer ELgR
Comment: We thank the reviewer for reading our responses and for the comment.
The improvements on Amazon-670K are indeed generally small (in most cases, both on the plots as well as in the tables presented in the main paper). Nevertheless, we would like to remark that the results for the Wikipedia-500K dataset, given in the Appendix, indicate more substantial improvements.
We appreciate that the reviewer admits that our approach can be an interesting option for some applications. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thorough comments and questions. We answer them in specific responses to the reviewers. We use the global comment to attach a PDF with additional plots containing results for “meta-measures” that combine instance and macro-averaged measures. These plots have been prepared as a part of the response to Reviewer ELgR. A detailed response has been given in the specific response to this Reviewer. In the PDF, we also include a table with the running time and the number of iterations of the proposed inference method in addition to the response to Reviewer 4ZEa and 4FGQ.
Pdf: /pdf/e66a54ca71c7b297933904d7281b47dc78c6af35.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper discusses evaluation metrics to measure the long-tail labels' performance of the extreme multi-label classification (XMLC) problems. The author proposed that macro-average based metrics (e.g., macro-avg Precision/Recall/F1 at k) are more suitable to measure the performance of long-tail labels compared to conventional instance-wised precision/recall at k. The author further discuss how to optimize these metrics on a given test set, with theoretical justification on the objective functions and deviation bounds. The empirical results verified the effectiveness of their proposed optimization methods for the given target metric.
Strengths: - Leverage the ETU framework to approximately optimize Macro-average metrics seems novel
- Theoretical justifications sounds reasonable and technical derivations are solid
Weaknesses: - Selecting thresholds to optimize Marco-average metrics are not new in multi-label community. The author seems not aware of some classical/heuristic approaches. For example, the sCut and SCutFBR methods studied in [1,2,3]. The author should also compare these methods in the experiment sections.
**Reference**
- [1] Y. Yang. A Study on Thresholding Strategies for Text Categorization. SIGIR 2001
- [2] Lewis et al. RCV1: A New Benchmark Collection for Text Categorization Research. JMLR 2004
- [3] Fan et al. A Study on Threshold Selection for Multi-label Classification. Technical Report 2007
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: **Major Questions**
- Q1: How does the proposed method relate and connect to other conventional threshold tuning approaches (e.g., [1,2,3])?
- Q2: How does the proposed method compare empirically to other conventional threshold tuning approaches (e.g., [1,2,3])?
- Q3: For the block coordinate ascent algorithm, how sensitive is it to the initial point? If it is a non-convex function, I suppose the initialization matters? Any better heuristic than the "predicting k random labels" described at line 185?
- Q4: For Line 195-198, monotonicity alone is not enough to ensure convergence [4]. Any more thorough/technical statement for the convergence analysis of the proposed block coordinate ascent algorithm?
**Minor issues**
- In Eq. (5), the first summation has index $i$ in the subscript, but index $i$ does not seem to be used in the function $u_w()$.
- At Line 154: "the proof is given in Appendix F". However, it seems to me that the proof appeared in Appendix G (Theorem G.4)?
**Reference**
- [1] Y. Yang. A Study on Thresholding Strategies for Text Categorization. SIGIR 2001
- [2] Lewis et al. RCV1: A New Benchmark Collection for Text Categorization Research. JMLR 2004
- [3] Fan et al. A Study on Threshold Selection for Multi-label Classification. Technical Report 2007
- [4] https://web.eecs.umich.edu/~fessler/course/600/l/lmono.pdf
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: To my knowledge, there's no potential negative societal impact for this submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the review and questions. We appreciate the hard work. Below we address the main questions:
> *Selecting thresholds to optimize Marco-average metrics are not new in multi-label community. The author seems not aware of some classical/heuristic approaches. For example, the sCut and SCutFBR methods studied in [1,2,3]. The author should also compare these methods in the experiment sections.*
> *Q1: How does the proposed method relate and connect to other conventional threshold tuning approaches (e.g., [1,2,3])?*
We fully agree with the reviewer that unbudgeted macro-averaged measures are popular in multi-label problems. It is easy to notice that optimization of those metrics boils down to solving $m$ independent binary classification problems. Each such binary problem can be solved using one of two frameworks for optimizing complex performance measures. The methods from the cited papers can be seen as implementations of the optimal strategy for the Population Utility (PU) framework. For the Expected Test Utility (ETU) framework, the exact optimization of those measures on a test set is of quadratic complexity; an approximate solution can be obtained in linear time. For a more detailed discussion of the PU and ETU frameworks and of optimizing macro- and micro-averages in multi-label classification, we refer the reviewer to [4, 5, 6].
In this paper, we optimize macro-averaged measures with budgeted@k predictions in the ETU framework. This additional constraint is critical in many practical applications of extreme multi-label classification, such as recommender systems and search engines. Unlike in the case of unbudgeted metrics, this constraint does not allow us to solve the problem for each label independently and hence requires a more sophisticated treatment. Therefore, our considerations are novel and non-trivial.
---
> *Q2: How does the proposed method compare empirically to other conventional threshold tuning approaches (e.g., [1,2,3])?*
The conventional thresholding methods do not restrict the number of predicted labels, so a direct comparison with our approach is not straightforward. Moreover, they are designed for the PU framework; for ETU without budgeted predictions, one would instead use different algorithms to find the threshold. For a detailed discussion, please refer to [4].
**References for Q1 and Q2:**
1. Y. Yang. A Study on Thresholding Strategies for Text Categorization. SIGIR 2001
2. Lewis et al. RCV1: A New Benchmark Collection for Text Categorization Research. JMLR 2004
3. Fan et al. A Study on Threshold Selection for Multi-label Classification. Technical Report 2007
4. Dembczynski et al.: Consistency analysis for binary classification revisited. ICML 2017.
5. Koyejo et al.: Consistent Multilabel Classification. NeurIPS 2015.
6. Kotłowski, Dembczyński: Surrogate regret bounds for generalized classification performance metrics. Machine Learning 2017.
---
> *Q3: For the block coordinate ascent algorithm, how sensitive is it to the initial point? If it is a non convex function, I suppose the initialization problem matters? Any better heuristic than "predicting k random labels" that described at line 185?*
In addition to predicting $k$ random labels as the initial point, we also tested top-$k$ labels with the highest marginals $\eta$. Both initialization methods gave very similar final results in all cases, so we omitted this experiment in the submitted paper. Also, please note that all the experiments were repeated 5 times with different random initial points. We reported the standard deviation in Tables 6 and 7. In most cases, the deviation is no more than 0.1%.
---
> *Q4: For Line 195-198, monotonicity alone is not enough to ensure convergence. Any more thorough/technical statement for the convergence analysis of the proposed block coordinate ascent algorithm?*
We thank the reviewer for this interesting comment. For metrics taking values in a bounded range and $\epsilon > 0$, the algorithm can make at most $O(1/\epsilon)$ steps in the `while` loop, as each step must increase the objective by at least $\epsilon$. However, even when $\epsilon = 0$, the algorithm terminates after a finite number of steps. This is because each step must strictly increase the objective (otherwise, the algorithm terminates), which is only possible if the algorithm changes at least one prediction (from 0 to 1 or the other way around) among all instances and labels in that step. Since the number of possible predictions is finite (precisely, $\binom{m}{k}^n$), the algorithm is guaranteed to stop after a finite number of steps.
In practice, even with a small $\epsilon$, the algorithm usually terminates after a few iterations (in the worst observed cases, a few tens of iterations). In Tables 6 and 7 in the Appendix, one can observe that the Greedy variant (the one-pass algorithm) already obtains good results in most cases; based on that, per-iteration improvements much larger than $\epsilon$ are to be expected.
In Table 1 in the PDF attached to the global response, we also present the number of iterations and CPU time that was used for our inference procedures.
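For illustration, the termination logic described above can be sketched as follows. This is a toy implementation of ours, not the paper's algorithm: the per-block update uses exhaustive search over $k$-subsets as a placeholder for an efficient update, and the objective is an arbitrary utility function supplied by the caller.

```python
import itertools
import numpy as np

def block_coordinate_ascent(eta, k, objective, eps=1e-4, rng=None):
    """Toy block coordinate ascent over budgeted@k predictions.
    `eta` is an (n, m) matrix of marginal label probabilities and `objective`
    maps a boolean (n, m) prediction matrix to a scalar utility."""
    rng = rng or np.random.default_rng(0)
    n, m = eta.shape
    # Initialization: k random labels per instance (the paper's default).
    pred = np.zeros((n, m), dtype=bool)
    for i in range(n):
        pred[i, rng.choice(m, size=k, replace=False)] = True
    value = objective(pred)
    while True:
        prev = value
        for i in range(n):
            # Re-optimize instance i's block with all other blocks fixed;
            # exhaustive search stands in for the efficient block update.
            for subset in itertools.combinations(range(m), k):
                trial = pred.copy()
                trial[i, :] = False
                trial[i, list(subset)] = True
                v = objective(trial)
                if v > value:          # monotone: only strict improvements
                    pred, value = trial, v
        if value - prev < eps:         # bounded metric => O(1/eps) passes
            return pred, value
```

With a linear objective such as expected precision@k, the sketch converges in a single pass, which mirrors the observation above that the Greedy one-pass variant is often already competitive.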
---
**Minor issues**
We thank the reviewer for pointing out the typographical issues. We will correct them in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's response. I will keep my score unchanged. | null | null | null | null | null | null |
Ess-InfoGAIL: Semi-supervised Imitation Learning from Imbalanced Demonstrations | Accept (poster) | Summary: The paper presents a semi-supervised approach to InfoGAIL for learning from demonstrations that enables disentangled learned behaviors. The proposed approach outperforms all considered baselines in MuJoCo simulation in imbalanced settings, and is robust to very little labeled data as well as varying levels of imbalance in the dataset.
Strengths: * The theme of this paper in handling imbalanced demonstration data is timely and interesting.
* The paper is well written.
* The experiments diligently consider the potential relevant settings showcasing the capabilities of Ess-InfoGAIL in terms of handling imbalanced data, and a small fraction of labeled datapoints.
Weaknesses: * There are several points of confusion for me in terms of the setting considered. From Fig. 1, it is unclear what about the expert trajectories indicates imbalance. I might have missed this, but what are the considered labels in the experiments? How many label classes are there?
* The code is not available as of yet.
* Table 1 is missing some measure of statistical significance.
* In line 263, two numbers are listed twice. Why is the improvement over both ablation variants the same for both cases?
* In Fig. 5, it seems that 0.5% labeled data achieves good performance (above 90%) on both Reacher and Pusher tasks? This observation does not seem to agree with the discussion. Could both Fig. 5(a) and 5(b) have the same y-scale to make it easier to compare?
* The following is a non-exhaustive list of errors/typos and stylistic improvements I found:
1. Line 45: 'correspond' -> 'corresponds'.
2. Lines 79-93: missing space between words and citations. In general, citations should use 'abbrv' option in the bibliography.
3. Hanging section heading (4.1).
4. The references should be proofread (e.g., IEEE should not be listed twice in a citation, appropriate words (e.g., Bayesian) should be capitalized).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors mentioned some limitations for the proposed method, although this section could use more depth.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer dvkr
> Q1. From Fig. 1, it is unclear what about the expert trajectories indicates imbalance. What are the considered labels in the experiments? How many label classes are there?
Fig. 1 illustrates a simple 2D-Trajectory scenario, facilitating the visual comparison of algorithm performance. In the environment, an agent tries to mimic 4 expert trajectories on a plane, each represented by a different color to indicate distinct modes (styles) of expert behavior (a small portion of the trajectories are labeled with one-hot behavioral categories).
Fig. 1 (a) shows the expert trajectories, where the red and green trajectories are predominant, while the blue and purple trajectories only represent a small portion. Our Ess-InfoGAIL algorithm can successfully imitate distinguishable behavior styles, while GAIL and InfoGAIL can only imitate the more abundant red and green expert trajectories, and they cannot differentiate between trajectory styles. The corresponding descriptions have been added to Fig. 1. For more detailed information on expert demonstrations, please refer to Common Response A2.
> Q2. The code is not available as of yet.
A2. To enhance the readability of the code, we have organized it and uploaded it to an anonymous GitHub repository. In accordance with the rebuttal policy, which prohibits the inclusion of links to external pages in posted content, we have provided an anonymized link to the Area Chair (AC) in a separate comment. The code will be released to the public promptly after the paper's publication.
> Q3. Table 1 is missing some measure of statistical significance.
A3. Thank you for your valuable comment. Tables 1, 2, and 3, together with error bars (standard deviations) for the reported data, have been incorporated into the Appendix.
> Q4. In line 263, two numbers are listed twice. Why is the improvement over both ablation variants the same for both cases?
A4. Thank you for your valuable feedback. The repeated numbers are an artifact of rounding precision. In the revised version, we uniformly keep three decimal places throughout the manuscript to avoid unnecessary confusion.
> Q5. Fig. 5 does not seem to agree with the discussion. Could both Fig. 5(a) and 5(b) have the same y-scale to make it easier to compare?
A5. Thank you for your valuable advice. We have adjusted Fig. 5, and revised the discussion in Section 4.2 to eliminate any ambiguity.
> Q6. A non-exhaustive list of errors/typos and stylistic improvements are provided.
A6. Thank you very much for providing your valuable suggestions. We have carefully corrected the minor errors in the paper and incorporated the suggested changes.
---
Rebuttal Comment 1.1:
Title: Follow-up to Reviewer dvkr
Comment: Dear Reviewer, we would like to ask if your concerns have been addressed by our responses and supplementary materials. Thank you for your time. | Summary: This paper focuses on semisupervised imitation learning with imbalanced data. Mainly, the approach extends InfoGAIL with a semisupervised learning architecture, inspired by ss-InfoGAN, where the latent variable is decomposed into a semisupervised part and an unsupervised part. The semisupervised part is defined as a learnable latent skill variable to deal with the imbalance in the data. Regularises Information Maximisation with a label prior is further used to improve the robustness against different models of behaviour and limited data. The proposed method is evaluated on a toy example as well as high-dimensional control tasks in Mujoco, like humanoid.
POST-REBUTTAL COMMENTS
The authors have addressed my comments satisfactorily. They presented further experimental results. Therefore I have increased my score from "borderline reject" to "borderline accept".
Strengths: - Well-written paper with clear motivation and objectives. The authors have intention to share the code.
- Experiments are well designed to showcase the contribution. For example, the method is evaluated with different levels of imbalance in the data (Table 2) and with different numbers of behaviour modalities (Table 3).
Weaknesses: - Some of the arguments in the paper are not clear to me. Looking at figure 1, do we want the agent to imitate the expert's different behaviour styles? or do we want agent to do the task? Why is learning different modes of behaviour important in this context? In addition, how is this achieved in the Mujoco environments? For example, how are different expert behaviours generated in the reacher environment? Why is classification relevant in this context?
- Experimental results are limited. Why are the results presented in terms of NMI and ENT only? How do we know the task performance? Expected cumulative reward should be presented or a video supplementary is needed to show that it is actually working. [56] is an image representation learning method, it doesn't make sense to use the same metrics in this paper where the goal is imitation learning. In addition, experiments regarding degree of data imbalance and learning more modalities are shown only in the reacher environment. We don't know whether the findings can be generalised to different environments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is the amount of data used in the experiments? Only the percentages of labelled data have been provided. However, still 0.1% can be too much, if the number of total samples is large, like 1M. Explain clearly how the experiments were designed. Also present the cumulative rewards to demonstrate the effective of the proposed method.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors briefly discussed the limitations. Potential negative impact does not apply.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer 18Ku
> Q1. Confused about Fig. 1. Do we want the agent to imitate the expert's different behaviour styles? Why is learning different modes of behaviour important in this context? How is this achieved in the Mujoco environments? Why is classification relevant in this context?
A1. (1) Fig. 1 illustrates a simple 2D-Trajectory scenario, facilitating the visual comparison of algorithm performance. In the environment, an agent tries to mimic 4 expert trajectories on a plane, each represented by a different color to indicate distinct modes (styles) of expert behavior. Fig. 1 (a) shows the expert trajectories, where the red and green trajectories are predominant, while the blue and purple trajectories only represent a small portion. Our Ess-InfoGAIL algorithm can successfully imitate distinguishable behavior styles, while GAIL and InfoGAIL can only imitate the more abundant red and green expert trajectories, and they cannot differentiate between trajectory styles. The corresponding descriptions have been added to Fig. 1.
(2) In the MuJoCo environment, taking Reacher as an example, the expert behavior styles can be the robot arm reaching different target points. In all experiments, we first pre-train K expert policies. Then the agents interact with the environment based on the trained policies to sample K expert trajectories with different styles as training data. In addition, to imitate real world motion capture data, the final expert data used for training is obtained by randomly sampling from the K expert trajectories. Please refer to Common Response A2 for more details.
(3) Our method can learn disentangled multi-modal behaviors from raw imbalanced expert data, utilizing only limited labeled data to guide the classification. This enables the policy network to focus more on learning representations related to behavior categories. Therefore, semi-supervised behavior style classification plays a crucial role in our approach. Please refer to Common Response A1 for more details.
> Q2. Experimental results are limited. Expected cumulative reward should be presented. NMI and ENT metrics are not necessary. Experiments regarding degree of data imbalance and learning more modalities are shown only in the Reacher environment.
A2. (1) Thank you for your valuable suggestion. We have supplemented the average reward in both the Experiments section of the main paper and the Appendix. All the results demonstrate the advantages of the proposed method. Please refer to Common Response A3 for more details. Furthermore, as this work focuses on semi-supervised imitation learning, a majority of the training data are unlabeled. In this context, NMI and ENT remain crucial metrics to assess whether the agent has learned distinct behavior modes. Therefore, we retain them in the main paper.
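For readers unfamiliar with the metric, NMI between learned skill assignments and ground-truth behavior labels can be computed as in the following generic sketch (ours, not the authors' evaluation code):

```python
import numpy as np

def nmi(a, b):
    """Normalized mutual information I(a;b) / sqrt(H(a) H(b)) between two
    discrete label assignments (e.g., learned skills vs. true behavior modes)."""
    a, b = np.asarray(a), np.asarray(b)
    ua, ub = np.unique(a), np.unique(b)
    # Joint distribution from the contingency table, plus its marginals.
    p = np.array([[np.mean((a == x) & (b == y)) for y in ub] for x in ua])
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    entropy = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    mask = p > 0
    mi = np.sum(p[mask] * np.log(p[mask] / np.outer(pa, pb)[mask]))
    denom = np.sqrt(entropy(pa) * entropy(pb))
    return mi / denom if denom > 0 else 0.0
```

NMI equals 1 when the learned skills correspond one-to-one to the true behavior modes (regardless of label permutation) and 0 when they are independent, which is why it complements reward-based evaluation for unlabeled data.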
(2) Reacher serves as a relatively convenient environment for evaluating the impact of data imbalance and the number of behavior modes. However, in other MuJoCo environments like Humanoid, generating expert policies with diverse behaviors (e.g., walking, jumping, crouching, etc.) through manual reward engineering is challenging (since we need to pre-train the expert policy). Nevertheless, such behavior data can be easily obtained using motion capture technology in the real world. In our future work, we will consider directly utilizing motion capture data to learn richer human behavior modes, as demonstrated in reference [1].
[1] Peng X B, Guo Y, Halper L, et al. Ase: Large-scale reusable adversarial skill embeddings for physically simulated characters[J]. ACM Transactions On Graphics (TOG), 2022, 41(4): 1-17.
> Q3. Explain clearly how the experiments were designed in Amount of Labeled Data section. Also present the cumulative rewards to demonstrate the effective of the proposed method.
A3. (1) Thank you for your valuable comment. If not specified otherwise, the default label ratio for the environments is as follows: 2D-Trajectory: 1%, Reacher: 0.5%, Pusher: 1%, Walker: 1%, Humanoid: 2%. For instance, in the Reacher environment, 0.1% of labeled data corresponds to 2 episodes (100 time steps) of each behavior mode. The labeled data used for training is very limited, covering only a few episodes in each environment of each behavior mode. More comprehensive details have been included in Section 4.2.
(2) Thank you for your valuable suggestion. We have supplemented the average reward in both the Experiments section of the main paper and the Appendix. All the results demonstrate the advantages of the proposed method. Please refer to Common Response A3 for more details.
We hope that the above response has addressed your concerns.
---
Rebuttal Comment 1.1:
Title: rebuttal acknowledgement
Comment: dear authors, this is just to let you know I read other reviews and responses. I do not require any further information at this stage, and need some time to process. if I need further information, I will touch base during the weekend. thanks.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 18Ku
Comment: Thank you for your acknowledgment and great efforts in helping us improve our work. We look forward to hearing more of your valuable insights and suggestions.
---
Reply to Comment 1.1.2:
Title: Follow-up to Reviewer 18Ku
Comment: Dear Reviewer, we wonder if you have any further concerns regarding our responses. Thank you for your time.
The paper addresses the problem of imitation learning in the context of real-world demonstrations that often present challenges such as multimodality, data imbalance, and expensive labeling processes.
# Main Contributions
The authors propose a novel semi-supervised imitation learning architecture, called Elastic semi-supervised InfoGAIL (Ess-InfoGAIL), that learns disentangled behavior representations from imbalanced demonstrations using limited labeled data. The method adapts the concept of semi-supervised generative adversarial networks to the imitation learning context, employs a learnable prior to align the generated and expert data distributions, and utilizes a regularized information maximization approach to further improve the semi-supervised learning performance. The authors demonstrate the efficiency of their method in challenging MuJoCo environments, showing that even in scenarios with highly imbalanced demonstrations, the policy can still reproduce the desired behavior modes.
# Methodology
The paper introduces a novel approach called Elastic semi-supervised InfoGAIL (Ess-InfoGAIL) for semi-supervised imitation learning from imbalanced demonstrations. The approach is based on three key improvements to InfoGAIL:
1. **Semi-supervised InfoGAIL**: The authors decompose the latent variable into a semi-supervised part and an unsupervised part. The semi-supervised part, denoted $c$, is a latent skill variable sampled from a categorical distribution, encoding the same information as the label $y$. The unsupervised part, denoted $\epsilon$, is a latent shifting variable sampled from a continuous uniform distribution, allowing for style shifting within a given skill. The authors seek to maximize the two mutual information terms $I(\epsilon; s, a)$ and $I(c; s, a)$.
2. **Learnable Latent Skill Variable**: To align the state-action transitions produced by a policy $π_θ$ with the imbalanced expert demonstrations, the authors utilize a differentiable skill variable drawn from a Gumbel-Softmax distribution.
3. **Regularized Information Maximization (RIM) with an approximate label prior**: The authors leverage RIM with a label prior that approximates the learned latent skill distribution to make use of the intrinsic information in the unlabeled imbalanced data and improve the efficiency of semi-supervised learning.
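The Gumbel-Softmax reparameterization underlying the learnable skill variable in point 2 can be sketched as follows. This is our illustrative NumPy version; the paper presumably uses a deep-learning framework's built-in, and the logits here are the learnable parameters of the skill prior.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Differentiable, near-one-hot sample from a categorical distribution
    parameterized by (learnable) `logits`, via the Gumbel-Softmax trick."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-9, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))              # Gumbel(0, 1) noise
    y = (np.asarray(logits) + g) / tau   # lower tau => closer to one-hot
    e = np.exp(y - y.max())              # numerically stable softmax
    return e / e.sum()
```

Because the sample is a smooth function of the logits, gradients can flow through the skill variable into the prior, which is what makes the skill distribution learnable and able to adapt to imbalanced data.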
The authors validate their method in a simple 2D trajectory environment for visualization and test it in four challenging MuJoCo environments.
# Experiments
The paper conducted experiments in a 2D trajectory environment and four MuJoCo environments (Reacher, Pusher, Hopper, and Humanoid) to validate the proposed method's ability to discover disentangled behavior representations from imbalanced demonstrations with limited labeled data. The experiments also analyzed the amount of labeled data required for the model to effectively encode the semantic meaning of the labels and the effect of varying the degree of data imbalance on the imitation of multimodal behaviors. The results demonstrated the efficiency of the proposed method, Ess-InfoGAIL, in learning multimodal behaviors compared to baseline methods such as GAIL, InfoGAIL, ACGAIL, and Elastic-InfoGAIL. The paper also conducted ablation experiments to verify the performance improvement of Gumbel Softmax and RIM techniques under the semi-supervised framework and imbalanced data.
Strengths: # Originality and significance
The work innovatively introduces semi-supervised learning to online adversarial imitation learning of multimodal behaviors by extending InfoGAIL with mechanisms proposed in Ess-InfoGAN. The imitation learning from demonstrations that contain distributional multimodality is a topic of great interest, and the approach shows promising performance.
# Quality
The algorithm derivation and experiments are solid.
# Clarity
Overall the article is clear, although some details are not provided.
Weaknesses: - Minor typo: The minimization should have $Q$ as an optimization variable in addition to $\pi$.
- The tasks in all experiments are not difficult due to the clear separation of the modes. As a reference, [1] shows that randomized search of the latent codes without encoder or posterior estimator can discern more subtly separated behavior modes in MuJoCo environments similar to those tested in this article (e.g. walker going forward and backward with various speeds).
- The evaluation metrics are limited. Only entropy and mutual information are reported, while for the environments and tasks appear in the article, there are commonly-acknowledged reward or score that can be used to quantify the quality of learned policy. Without those metrics reported, the quality of the policies is unclear to the readers.
- The implementation codes are not provided.
[1]: Vahabpour, A., Wang, T., Lu, Q., Pooladzandi, O., & Roychowdhury, V. (2022). Diverse Imitation Learning via Self-Organizing Generative Models. arXiv preprint arXiv:2205.03484.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In equation (9) and algorithm 1, which part corresponds to the learnable latent skill variable? What are the learnable parameters and what is their update in each iteration?
- What is $D_\text{body}$ in Figure 2?
- What is the label ratio in the data imbalance experiments described in section 4.3?
- How are the expert demonstrations generated?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors note that the limitations include the requirement of a small amount of labeled data for each category and the preset number of modal categories.
In addition, the limitation of low difficulty of tasks and the limited evaluation metrics are mentioned in the "Weakness" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer fzCM
> *Q1. Minor typo: The minimization should have the value function as an optimization variable in addition to $\pi$.*
A1. Thank you for your valuable suggestion. We have incorporated the optimization of the value function into Section 3.3 of the main paper.
> Q2. The tasks in all experiments are not difficult due to the clear separation of the modes. Taking [1] as a reference.
A2. (1) Learning distinguishable behavior modes from multi-modal data remains challenging, especially when the data size of each behavior mode is imbalanced.
(2) The provided reference [1] is an intriguing work that utilizes a generator model without an encoder for behavior cloning, capable of distinguishing and imitating different behavior modes. However, fully unsupervised searching for latent variables has been proven to be difficult, as indicated in [2], which is evident from the exhaustive search for latent variables in the procedure of [1]. This is also the motivation behind our introduction of semi-supervised learning. In addition, our method can efficiently learn both discrete behavior modes and continuous behavior styles (e.g., walking speed) without the need for exhaustive search. Please refer to Appendix C for a simple visualized example. It is promising to combine and complement our work with [1]. Unfortunately, the code for [1] is not available yet.
[1] Vahabpour A, Wang T, Lu Q, et al. Diverse Imitation Learning via Self-Organizing Generative Models[J]. arXiv preprint arXiv:2205.03484, 2022.
[2] Locatello F, Bauer S, Lucic M, et al. Challenging common assumptions in the unsupervised learning of disentangled representations[C]//international conference on machine learning. PMLR, 2019: 4114-4124.
> Q3. The evaluation metrics are limited. The average reward should also be provided.
A3. Thank you for your valuable suggestion. We have supplemented the average reward in both the Experiments section of the main paper and the Appendix. All the results demonstrate the advantages of the proposed method. Please refer to Common Response A3 for more details.
> Q4. The implementation codes are not provided.
A4. To enhance the readability of the code, we have organized it and uploaded it to an anonymous GitHub repository. In accordance with the rebuttal policy, which prohibits the inclusion of links to external pages in posted content, we have provided an anonymized link to the Area Chair (AC) in a separate comment. The code will be released to the public promptly after the paper's publication.
> Q5. In equation (9) and algorithm 1, which part corresponds to the learnable latent skill distribution? What are the learnable parameters and what is their update in each iteration?
A5. (1) The latent skill variable $\mathbf{c}$, concatenated with the state, serves as an input to the policy, and the learnable latent skill distribution $p(\mathbf{c})$ is updated along with the policy. Specifically, the objective function of $p(\mathbf{c})$ aligns with the policy, which aims at minimizing the first two parts of $V_{Ess-InfoGAIL}$ in Equation 9.
The revised objective function of Ess-InfoGAIL in Equation 9 is defined as follows:
$\min_{\pi, Q^{\epsilon}, Q^{\mathbf{c}}, p(\mathbf{c})} \max_{D}\left(V_{\text{InfoGAIL}} - \lambda_{2}L_{\text{IS}} - \lambda_{3}L_{\text{RIM}}\right)$
And the update of $p(\mathbf{c})$ is as follows:
$p_{i+1}(\mathbf{c}) = \arg\min_{p(\mathbf{c})}\left(V_{\text{InfoGAIL}} - \lambda_{2}L_{\text{IS}}\right)$
We have revised Algorithm 1 and the description of Equation 9 to enhance readers' understanding.
(2) In our model, the learnable parameters include $\theta, \beta, \phi, \psi, p(\mathbf{c})$, corresponding to the parameters of the policy, value function, discriminator, encoders and the latent skill distribution. Their update processes in each iteration are detailed in the revised Algorithm 1.
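As a toy illustration of a learnable skill prior, the sketch below (ours, not the paper's joint update with the policy) gradient-descends the logits of $p(\mathbf{c})$ so that the prior matches an imbalanced empirical skill frequency:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fit_skill_prior(expert_freq, steps=2000, lr=0.5):
    """Gradient descent on the logits of a categorical prior p(c) so that it
    matches an (imbalanced) empirical skill frequency. A stand-in for the
    joint update of p(c) with the policy in the revised Algorithm 1."""
    logits = np.zeros_like(expert_freq, dtype=float)
    for _ in range(steps):
        p = softmax(logits)
        # Gradient of KL(expert_freq || p) w.r.t. the logits is p - expert_freq.
        logits -= lr * (p - expert_freq)
    return softmax(logits)
```

In the actual method, the alignment signal comes from the adversarial objective rather than from known frequencies; this sketch only shows why parameterizing $p(\mathbf{c})$ by logits makes the imbalanced prior recoverable by gradient steps.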
> Q6. What is $D_\text{body}$ in Fig. 2?
A6. $D_{\text{body}}$ serves as a shared backbone network for the discriminator $D_{\phi}$ , and the encoders $Q_{\psi}^{\epsilon}$ and $Q^{\mathbf{c}}_{\psi}$ , responsible for extracting features from state-action pairs. We have added explanations in Fig. 2 and included the architecture diagram of the backbone network in the Appendix.
> Q7. What is the label ratio in the data imbalance experiments described in section 4.3?
A7. In the data imbalance experiments of Section 4.3, the label ratio is set to 0.5%, where 0.1% of the data corresponds to 2 episodes (100 time steps) of each behavior mode in the Reacher environment. We have included relevant descriptions in Section 4.3. Specifically, if not otherwise specified, the default label ratios are as follows: 2D-Trajectory: 1%, Reacher: 0.5%, Pusher: 1%, Walker: 1%, Humanoid: 2%. The corresponding descriptions have also been added to Section 4.2.
> Q8. How are the expert demonstrations generated?
A8. To collect imbalanced multi-modal demonstrations, we first train an expert policy for each mode. Based on these expert policies, the agent then interacts with the environment to collect expert trajectories. Finally, we extract data from these trajectories in different proportions to create imbalanced multi-modal demonstrations. Please refer to Common Response A2 for more details.
---
Rebuttal Comment 1.1:
Title: Follow-up to Reviewer fzCM
Comment: Dear Reviewer, we would like to ask if your concerns around the related work and the evaluation metrics have been addressed in the author response. Thank you. | Summary: The paper introduces a semi-supervised imitation learning architecture that addresses challenges associated with real-world demonstrations, such as multimodality, data imbalance, and expensive labeling processes. The proposed method utilizes three key components: adapting semi-supervised generative adversarial networks to the imitation learning context, employing a learnable prior to align generated and expert data distributions, and utilizing a regularized information maximization approach along with the learned prior to enhancing semi-supervised learning performance. Experimental results highlight the effectiveness of the proposed method in learning multimodal behaviors from imbalanced demonstrations, outperforming baseline methods.
Strengths: * The idea of this paper is well-motivated and the investigated problem is practical.
* Empirical results show great improvement over baseline and other compared methods. The visualized results are good to demonstrate the efficiency
Weaknesses: * The novelty seems to be limited. This paper appears to be an extension of ss-InfoGAN [1] to the imitation learning setting, and the authors do not fully discuss the differences between ss-InfoGAN and the proposed method.
* The quality of the paper can be improved. There seem to be a lot of typos in the paper, for example, the missing figure references in lines 208, 212, and 216. Moreover, I also cannot find the definition of $D_\text{body}$ in Figure 2, which makes it hard to understand the framework of the whole algorithm.
[1] Spurr, Adrian, Emre Aksan, and Otmar Hilliges. "Guiding infogan with semi-supervision." Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Cham: Springer International Publishing, 2017.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Please provide more details about how to collect imbalanced multi-modal demonstrations used for training.
* I am also confused about the evaluation metrics. It seems that the authors only measure the distance between agent demonstrations and multi-modal expert demonstrations. However, the average reward, which is a basic evaluation metric should also be provided for comparison.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have briefly discussed the limitations of the paper in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer iRnn
> Q1. The novelty seems to be limited. More discussion of ss-InfoGAN needs to be added.
A1. Indeed, we drew inspiration from ss-InfoGAN and extended it to the imitation learning framework, addressing key issues that persist in the field of imitation learning, such as multi-modal behavior, data imbalance, and expensive labeling processes. The main differences from ss-InfoGAN are as follows: 1. We focus on imitation learning in sequential decision-making tasks, and 2. We introduce a learnable latent skill distribution and an improved RIM approach to tackle critical challenges commonly present in imitation tasks based on a substantial amount of imbalanced raw data. Relevant discussion has been added to Section 3.2. Please refer to Common Response A1 for more details.
> Q2. Some minor errors need to be corrected. The definition of $D_\text{body}$ in Fig. 2 is not clear.
A2. (1) Thank you for your helpful comments. We have rectified these minor errors.
(2) $D_{\text{body}}$ serves as a shared backbone network for the discriminator $D_{\phi}$ , and the encoders $Q_{\psi}^{\epsilon}$ and $Q^{\mathbf{c}}_{\psi}$ , responsible for extracting features from state-action pairs. We have added explanations in Fig. 2 and included the architecture diagram of the backbone network in the Appendix.
> Q3. How to collect imbalanced multi-modal demonstrations used for training?
A3. To collect imbalanced multi-modal data, we first train an expert policy for each mode. Subsequently, based on these expert policies, the agent interacts with the environment to collect expert trajectories. Finally, we extract data from these trajectories in different proportions to create imbalanced multi-modal data. Please refer to Common Response A2 for more details.
> Q4. The average reward should also be provided for comparison.
A4. Thank you for your valuable suggestion. We have supplemented the average reward in both the Experiments section of the main paper and the Appendix. All the results demonstrate the advantages of the proposed method. Please refer to Common Response A3 for more details.
---
Rebuttal Comment 1.1:
Title: Follow-up to Reviewer iRnn
Comment: Dear Reviewer, we wonder if your concerns around the data collection and novelty of our method have been addressed in the author response. Thank you. | Rebuttal 1:
Rebuttal: # Common Response
We are thankful to the reviewers for their valuable feedback. We first address the comments that are common to multiple reviewers and then respond to the reviewers individually.
> Q1. The significance of this work.
A1. Our method draws inspiration from semi-supervised GANs and extends it to the multi-modal imitation task under imbalanced data, which plays a crucial role in enabling agents to effectively imitate real-world demonstrations. By utilizing the proposed Ess-InfoGAIL, high-quality multi-modal behaviors can be learned from a large amount of **raw expert demonstrations** without the need for:
* Labeling behavior modes (e.g., human walking, running, jumping, etc.) for each data point.
* Extracting independent segments of behavior modes.
* Establishing a balanced distribution of data among behavior categories.
All we require is a very limited amount of labeled data for learning guidance. These advantages are currently lacking in the majority of existing imitation learning algorithms.
> Q2. How to collect multi-modal demonstrations?
A2. In our experiments, we first pre-train K expert policies, each corresponding to K different goals (or K behavior modes). Subsequently, we use these K expert policies to sample K sets of expert demonstrations. From each set of expert demonstrations, we extract a small portion and label them with the one-hot behavioral categories, while the remaining expert demonstrations are randomly sampled and mixed to create imbalanced unlabeled expert data.
Moreover, in the real-world scenario, one can directly use the raw motion capture data (e.g., motion capture data of an animal over a day) without the need to train additional expert policies. The collection of multi-modal demonstrations has been clarified and added in Section 4 of the main paper.
> Q3. The average reward needs to be provided.
A3. We sincerely appreciate the valuable suggestions from the reviewers. We have incorporated relevant discussion of the average reward in Section 4.1 of the main paper. The average reward of the Reacher environment during the training process is added in Fig. 5. Due to space constraints, the average rewards of the other environments have been included in the Appendix. Furthermore, the normalized average reward table, encompassing the outcomes of all algorithms, has been included in the Appendix and is presented below. The results are also included in an additional single-page PDF file. All the results demonstrate the advantages of the proposed method.
| Method | 2D trajectory | Reacher | Pusher | Walker-2D | Humanoid |
|---|---|---|---|---|---|
| GAIL | 0.071$\pm$ 0.019 | 0.189$\pm$ 0.087 | 0.338$\pm$ 0.029 | 0.459$\pm$ 0.014 | 0.508$\pm$ 0.033 |
| InfoGAIL | 0.155$\pm$ 0.028 | 0.223$\pm$ 0.054 | 0.431$\pm$ 0.043 | 0.552$\pm$ 0.059 | 0.526$\pm$ 0.127 |
| ACGAIL | 0.540$\pm$ 0.061 | 0.616$\pm$ 0.131 | 0.790$\pm$ 0.042 | 0.703$\pm$ 0.040 | 0.614$\pm$ 0.040 |
| Elastic-InfoGAIL | 0.189$\pm$ 0.059 | 0.261$\pm$ 0.079 | 0.483$\pm$ 0.042 | 0.616$\pm$ 0.039 | 0.544$\pm$ 0.111 |
| Ess-InfoGAIL$\backslash$GS | 0.845$\pm$ 0.023 | 0.868$\pm$ 0.050 | 0.867$\pm$ 0.040 | 0.812$\pm$ 0.050 | 0.817$\pm$ 0.017 |
| Ess-InfoGAIL$\backslash$RIM | 0.882$\pm$ 0.036 | 0.826$\pm$ 0.035 | 0.772$\pm$ 0.061 | 0.770$\pm$ 0.019 | 0.751$\pm$ 0.065 |
| Ess-InfoGAIL (Ours) | **0.956$\pm$ 0.040** | **0.933$\pm$ 0.033** | **0.967$\pm$ 0.022** | **0.911$\pm$ 0.042** | **0.920$\pm$ 0.036** |
It is important to note that we make modifications to the original MuJoCo environment, and the computation of the task reward varies with different target behavior modes. We take the average value across all behavior modes as the final average task reward, and this task reward is only used as an evaluation metric and does not participate in the policy training process. Due to the issue of mode collapse and data imbalance, some methods (e.g., GAIL and InfoGAIL) may end up learning only one or two behavior modes, resulting in average task rewards across all modes that could be lower than those achieved by using a random policy.
> Q4. The implementation codes are not available.
A4. To enhance the readability of the code, we have organized it and uploaded it to an anonymous GitHub repository. However, in accordance with the rebuttal policy, which prohibits the inclusion of links to external pages in posted content, we have provided an anonymized link to the Area Chair (AC) in a separate comment. The code will be released to the public promptly after the paper's publication.
> Q5. Minor errors.
A5. We have corrected minor errors in the paper and have incorporated the suggested changes.
Please let us know if there are any remaining questions!
Pdf: /pdf/8d9e463d2c43f227bf0fdfd0e3b2a2073a1b1626.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Auxiliary Losses for Learning Generalizable Concept-based Models | Accept (poster) | Summary: The paper proposes multiple improvements to the current paradigm for learning, applying, and evaluating concept bottleneck models (CBMs) for interpretable-by-design classification. The authors propose to improve the supervised learning of concepts in CBMs through a concept-orthogonality loss (COL) that encourages samples from different classes to have orthogonal feature representations used for concept prediction, and samples with the same concepts to have similar representations. Additionally, to improve the classification performance of the model, they propose to predict class labels from this representation as well. In terms of the usage of CBMs, the authors propose to incorporate the supervisor's confidence when making decisions about interventions. In terms of evaluation, they propose to also evaluate baselines under distribution shifts.
Strengths: Strengths:
(a) The paper is well written in large parts. The organization is fluid and comfortable to follow. The motivations for the proposals are also generally clear.
(b) The experiments are solid and extensive with multiple different datasets, baselines and experimental settings.
(c) The paper contains multiple novel contributions in terms of learning, usage or evaluation of CBMs.
The quality and clarity are high in most parts. Considering all the contributions as a whole, the paper is reasonably original and moderately significant in my opinion.
Weaknesses: Weaknesses:
(a) Each individual contribution is limited in either novelty or impact. For instance, the concept-orthogonality loss (COL) is the most novel contribution, but I wouldn't consider the improvement in concept/model accuracy (Tables 1-4) highly significant. At the other extreme, while I'd consider the evaluation of all baselines under distribution shifts useful for the community, I won't regard it as highly original. This weakness sets an upper bound on the strength of this paper for me. That being said, considering all contributions together, this won't be a determining factor for me in terms of recommending acceptance or rejection.
(b) The experiments studying impact of loss weights are lacking in certain ways. More details about this in section for questions.
(c) Statements and reasoning around the concept-orthogonality loss (COL) are sometimes confusing and potentially misleading to me. This is the most significant concern. Please refer to the questions/concerns below for more details about this.
Overall, I enjoyed reading the paper. Moreover, the paper as a whole should be useful for the ML community. However because of some major doubts regarding discussion around COL which is the most crucial idea of the paper, I can only favour the paper for borderline acceptance currently.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions:
(a) Loss weights: Among the loss weights to be set, $\gamma$ and $\lambda$ relating to COL seemed most crucial to me. The choice of $ \lambda=0.05 $ seems a little arbitrary. Could you explain how did you arrive at this value whether it be through a heuristic or otherwise? Also, is the system behaviour sensitive to choice of $\lambda$? How is the learning affected if its value changes? The experiment studying hyperparameters (appendix E) should also study role of $\lambda$.
(b) Statements surrounding COL: I got confused 2-3 times in understanding COL due to the statements made about it. Before going into the specific instances, I want to confirm that my understanding is accurate. Please correct me if I am mistaken in the following statements.
1. $q$ is a single feature representation used to predict **all** the concept labels.
2. In eq (5) $c_i, c_j$ denote concept labels for sample $i, j$ in a batch. These are binary vectors of size number of concepts. Consequently, COL encourages $q_i, q_j$ to be orthogonal if **ANY** concept label differs between samples $i, j$. COL also encourages $q_i, q_j$ to align if **ALL** concept labels among $i, j$ match.
If both these statements are correct, then these are my questions/concerns:
1. Is the differing of any concept label between samples $i, j$ equivalent to them belonging to different classes (line 188, Pg 5)? Although this is a completely reasonable proxy, is it possible that two samples from the same class differ in just one concept label and thus COL regards them as dissimilar?
2. Multiple times in line 182-186 in context of COL similarity loss, you state that it encourages similarity between feature representation for samples from "same concept". Isn't this an inaccurate phrasing? The accurate phrase should be for samples with "same set of concept-labels".
3. In lines 290-292, you also refer to the COL loss as "maximizing the inner product between the concept embeddings of different concepts". Firstly, should it be 'minimizing'? Also, there is only one concept embedding $q$, right? Reading this statement combined with the ones in the previous point was particularly confusing, as it conveys a notion of orthogonality imposed differently compared to COL. Namely, one could encourage different feature maps in $q$ to be used in the prediction of different concepts, even for a single sample. This would more accurately represent "separating" representations of different concepts. But that is not what COL proposes.
4. Lastly, following the previous point, doesn't COL have some disadvantages in terms of imposing orthogonality for learning concepts? To be more specific, consider as an example two samples of similar classes. They are very likely to share many concepts (but not all). Isn't it sensible that their feature representations be similar, as they share many concepts? The COL loss, on the contrary, encourages the feature representations to be completely dissimilar. Isn't it more sensible for the dissimilarity loss to be applied **ONLY IF ALL** the concept labels present in the two samples differ? If indeed this is the case, it needs to be made very explicit.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss the limitations in appendix. The content of the discussion covers some key points and is adequate for me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback.
> Loss weights: experiments around $\lambda$
We experimented with different values of $\lambda$ in our experiments, and the model+COL seemed to be fairly robust across different values of $\lambda$. We have put those results in Table 5 of the rebuttal PDF on the CUB and TIL datasets. While a fine-tuned value of $\lambda$ might show better performance, we observe that, regardless, the model is still able to beat the performance of the other baselines. We observed that 0.05 provides a good tradeoff between performance and uncertainty across datasets.
> Definition of COL confusion.
Upon second look, we can see why the definition of COL could lead to confusion, we would like to make a small change to the equation and update it. Hopefully, this can alleviate some of the concerns.
$d_1=\sum_{\substack{i,j\in B \\ c_i^a = c_j^a \\ a\in A}}\frac{q_i^T q_j}{||q_i||\,||q_j||};\quad d_2=\sum_{\substack{i,j\in B \\ c_i^a\neq c_j^a \\ a\in A}} \frac{q_i^T q_j}{||q_i||\,||q_j||}$, where $A$ is the set of all concepts.
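For intuition, the $d_1$/$d_2$ terms above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: the function and variable names, the use of class labels as a proxy for "same set of concept labels" (class-level concept annotations, as in CUB), and the `gamma` weighting are assumptions.

```python
import numpy as np

def col_loss(q, labels, gamma=1.0):
    """Illustrative sketch of the concept-orthogonality loss.

    q      : (B, D) array of per-sample concept feature representations
    labels : (B,) class labels, used here as a proxy for matching
             concept vectors (class-level concept annotations)
    d1 sums cosine similarities over same-class pairs (to be maximized);
    d2 sums cosine similarities over different-class pairs (to be minimized).
    """
    q = q / np.linalg.norm(q, axis=1, keepdims=True)  # unit-norm rows
    sim = q @ q.T                                     # pairwise cosine similarities
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    d1 = sim[same & off_diag].sum()  # align representations within a class
    d2 = sim[~same].sum()            # push different classes toward orthogonality
    return gamma * (d2 - d1)         # minimizing drives d2 toward 0 and d1 up
```

Under this sketch, a batch with identical representations within each class and orthogonal representations across classes attains the minimum of the loss.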
> $c_i$ and $c_j$
With the clearer definition, $c_i$ and $c_j$ indeed represent the binary concept vectors across the batch $i$ and $j$, while $c_i^a$ and $c_j^a$ are the individual concept values for a given concept $a$ across the batch.
>Is differing of any concept label between sample $i$, $j$ equivalent to them belonging to different classes (line 188, Pg 5)? Although this is a completely reasonable proxy but is it possible that two samples from same class differ in just one concept label and thus COL regards them as dissimilar?
We understand that Eqn 5 might have been confusing, but looking at the revised version, we can see that it aligns with L188. Here the concept representations of two images from the same class are brought closer. It is also worth noting that datasets such as CUB and AwA2 have class-level concept representations.
>Multiple times in line 182-186 in context of COL similarity loss, you state that it encourages similarity between feature representation for samples from "same concept". Isn't this an inaccurate phrasing? The accurate phrase should be for samples with "same set of concept-labels".
Thank you for pointing this out. We will rephrase it as "concepts of the same class".
> L290-292 typo. This would more accurately represent "separating" representations of different concepts. But that is not what COL proposes.
We would like to apologize for the confusion. Indeed, L290-291 contains a typo: it should be "minimizing", not "maximizing". Additionally, we would like to confirm that COL attempts to increase the separation between concepts by introducing the orthogonality constraint.
> Isn't it more sensible for dissimilarity loss to be applied ONLY IF ALL the concept labels present in two samples differ? If indeed this is the case, it needs to made very explicit.
Following your feedback, we have tried to make the COL definition more explicit mathematically. The dissimilarity loss $d_2$ is applied in the case of different image labels.
We thank the reviewer for constructive feedback. We ran the experiments and attempted to answer all of the concerns of the reviewer and would be happy to answer any further questions.
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for the rebuttal.
> With the clearer definition, we would rename $c_i$ and $c_j$ indeed represent the binary concept vectors across the batch $i$ and $j$, while $c_i^a$ and $c_j^a$ are individual concept values for a given concept across the batch.
Your description of the new equation doesn't add more clarity. I think $i$ and $j$ are indices of samples and not the batch. Similarly, $c_i$ and $c_j$ should be the binary concept vectors for specific samples and not the concept vector "across the batch".
Your other references to $d_1$ and $d_2$, given below, give a lot more clarity in this regard and align very closely with my understanding.
> Here the concept representation of two images from the same class is brought closer.
> And dissimilarity loss, $d_2$ is applied in case of different image labels.
This essentially leaves one of my questions unanswered, which is about the application of the dissimilarity loss $d_2$. One can have two samples with different class labels which share many (but not all) concepts, e.g., bird species with blue heads and small size in CUB. Your described training would still apply the dissimilarity loss between the two samples since their class labels are different. Why is it sensible to apply the dissimilarity loss in this case even though multiple concepts can be shared between the two classes? Would it not lead to very inefficient use of the feature representation $q$, since many feature maps will be detecting the same concept for different classes?
---
Reply to Comment 1.1.1:
Title: Response to the reviewer's question
Comment: We would like to thank the reviewer for their response. Please find the answer to your query below.
Following your example, let us consider two different bird species, A and B with their respective binary concept vectors, $c_A = [1,0,1,0]$ and $c_B=[1,1,1,1]$. Now, let's suppose that these concepts denote attributes like "feathers," "beak," "wings," and "claws." We observe that the concept “feathers” and “wings” are the same for both samples. If our model were to learn shared feature representation, it would lead to softer (probabilistic) concept predictions. This, in turn, brings about what's known as concept leakage. This leakage fundamentally compromises the model's interpretability and trustworthiness because the concept predictor is no longer predicting the label on the basis of “hard” (0/1) concept information instead of probabilities. In the literature, [16,8] have observed that encoding of this soft information across concepts is not favored in CBMs since a supervisor cannot be sure if the model is predicting a concept because of its presence, or because it is encoding for something else. Additionally to limit the learning of correlations and leakage Havasi et al. [8] suggested that concept prediction must be independent. Hence shared representations are not ideal for concept representation despite their potential to improve task accuracy. Therefore in order to promote trustworthiness and robustness in concept explanation we propose to apply $d_2$ between $c_A$ and $c_B$. Using the dissimilarity cosine loss, we are able to create “disentangled” representations for concepts of each label.
We hope that our answer has addressed the reviewer's question and concern. We would be happy to answer any further concerns. Additionally, if we have answered your question, we would be grateful if you could kindly consider increasing the score.
Strengths: Thank you for the very well-written and interesting paper. I thoroughly enjoy reading this work, and I believe it is both addressing an interesting problem while having the potential to inspire different methods elsewhere. After carefully reading the paper and the accompanying appendix, I believe that its main strengths are:
1. Although the main idea in this paper is simple in nature, it is an interesting approach to alleviating some of the concerns that have been raised on the quality of learnt concept representations in CBMs, particularly regarding leakage.
2. The evaluation across four real-world datasets, as well as variations of these datasets with spurious correlations for OOD evaluation, makes the experiment section of this paper very well-designed. In particular, I believe there is a lot of potential impactful work in better understanding these models in OOD scenarios.
3. Given the need for interpretable architectures that are actually useful in practice, the core contributions of this work have the potential to be impactful if they stand thorough evaluation.
4. The main idea is not entirely novel, as it shares similarities with ideas previously proposed in disentanglement learning and contrastive learning. Still, its application and evaluation in this space are novel and worthy of study.
Weaknesses: While I believe the ideas in this work have future potential, I think there is significant room for improvement in how this work is presented and evaluated. Specifically, I believe the following are its main limitations:
1. I have some doubts about some of the claims made by the paper regarding the usability of its proposed method in concept-scarce (or concept-incomplete) settings. The results briefly discussed in passing in Section 5, and discussed in more detail in Section E of the Appendix, seem a bit suspicious to me without further clarification. This is because, intuitively, one would not expect Coop-CBM to perform significantly better than CBM-AR or Joint-CBM when the number of training concepts is very limited (as the bottleneck's capacity will simply be too constrained!). Please see the question related to this point in the section below to see where my suspicion arises from. Please feel free to correct me if this is a misunderstanding on my end.
2. Another serious weakness in this work, which I believe other reviewers may also point out, is the sheer number of hyperparameters the proposed loss has ($\alpha, \beta, \gamma, \lambda$ for the loss and even more for the intervention algorithm). This, together with the lack of an accompanying ablation study showing whether the results presented in this work would change if one fine-tunes these hyperparameters as well as those in competing methods, severely limits the robustness of the evaluation. The claim that "for fair comparison, the concept prediction weight (here $\alpha$) in our baseline models is set to the same value" (lines 248-249) is in fact hard to take at face value. This is because it is unclear why all other hyperparameters are set to $0.01$ and whether this happens to be a value that greatly benefits the presented method. Therefore, more clarity on this end, including a potential ablation study and an appropriate hyperparameter selection for competing methods, would make the claims in this paper much stronger.
3. A lot of the details on the intervention side of things are left undefined in the main body of the paper and moved to the appendix. Currently, most of the contributions this work makes on the side of interventions (e.g., the study of user uncertainty or the use of a new intervention policy for selecting concepts) seem completely orthogonal to the rest of the paper’s work/main motivation and therefore are hard to evaluate within the context of the same paper. Don’t get me wrong: I believe these are interesting results; it is just that they currently seem out of place with almost all details pushed to the appendix (see how much methodological content section F has) and with a very limited evaluation if this is a core contribution. For example, it is unclear whether Algorithm 1 is used to select concepts in the results of Figure 1 of the main paper. Based on the caption saying that concepts are randomly intervened, I believe this is not the case. If it is, then it is not a fair comparison to compare Coop-CBM with a non-trivial concept selection policy against baselines that select concepts at random. If it is not, then there is no evaluation of the proposed concept selection policy anywhere in the main body of the paper. Either way, further evaluation of the proposed policy or clarification of how it fits in with the rest of the work is required.
4. I am a bit unsure about what the main takeaways of the experiments in Section 5.3 are that are not offered by previous experiments. Clarification on this end could help a lot.
5. The codebase provided is incomplete (I could not find some of the baselines) and missing key comments or pseudo-documentation. I am not expecting production-quality documentation in any way but at least a README to guide readers through it would be very useful.
6. Against traditional convention, and the authors' answer to the questionnaire question on error bars, Figure 1 and Tables 3, 4 and 5 are all missing error bars.
7. Although in the questionnaire the authors indicate the compute resources and licenses are stated in the paper/supplementary material, I was unable to find either of these.
8. Similarly, a lot of experimental/reproducibility details are missing from their codebase and their paper’s supplementary material, severely limiting the possibility of reproducing these results.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: After carefully reading the paper and the appendix, I had some trouble getting convinced by some of its results and proposed methods. My score is mostly influenced by this work's (1) lack of proper evaluation against existing baselines (specifically in how hyperparameters are selected), (2) its possibly counterintuitive results when the set of concept annotations is incomplete (and the lack thereof of an explanation for such unexpected results), and (3) the large amount of missing of details/explanations in some crucial components of this work that may lead to reproducibility issues. However, I think my concerns may be addressed by successfully addressing these questions:
1. If you look at Figure 3 of [1], you can see an evaluation of interventions in the method I believe this work calls “CBM-AR” (called “Hard AR w/o side-channel” in [1]). In that figure, it is clear that the task accuracy of CBM-AR is significantly low when one uses a small number of concepts during training (as one would expect as the model’s bottleneck is extremely constrained). However, in the results shown in Table 1 of the Appendix, the same drastic change in accuracy is not observed for CBM-AR (with only a drop of ~8% in task accuracy when using 10% of all concept groups). This makes me a bit suspicious of all of the results in this table. Therefore, I was wondering the following:
1. When the paper says, for example, that 10% of concepts were used as annotations during training, does that mean that the same group of 10% of training concepts was given for all samples of the training set, **or** that, for any given sample, 10% of the concepts were selected at random and provided with annotations regardless of what concepts were provided to other training samples? That is, are concepts being subsampled at a "global" scale (where all samples are provided with the same subsample of concept annotations) or at a local scale (where each sample may have different concept annotations than other samples)? My guess is that it is the former (global-scale subsampling), but I want to confirm this as it is unclear from the text.
2. If it is the former as I believe, then is the paper using the side channel version of “Hard AR” when evaluating CBM-AR? If so, this should be made explicitly clear somewhere as no details could be found.
3. **More importantly**, could you please clarify what is the intuition behind Coop-CBM having such a high accuracy when one uses a small number of training concepts (say 10%)? If the prediction used for the downstream task is that given by $g(f(x))$, then **I find this result extremely surprising** given that Coop-CBM is still, just like its traditional CBM counterpart, severely constrained at the bottleneck when one uses a small number of concepts (and one would thus expect an even more severe drop in performance). One possible way I can see one can achieve this result is if the prediction used to compute the accuracy reported in Table 1 of the appendix is to use instead the immediate prediction made by $h$ (i.e., $h(f(x))$) as the output prediction of Coop-CBM. If that is the case, though, then this goes against how the model is originally defined in Section 3.2 and, more importantly, would mean that interventions are meaningless for this process as the prediction is bypassing the concept bottleneck. One convincing piece of evidence that your model is indeed working as expected (i.e., by predicting the label given by $g(f(x))$) is to show that it still positively reacts to interventions when the number of concepts is small (say 10%).
2. How do the results reported in some of the experiments change if one selects the hyperparameters for competing methods using a simple hyperparameter search (say over 0.1, 1, 10) for the concept loss weight in Joint-CBMs, CEMs, and CBM-ARs?
3. Could you please clarify the intervention concerns raised in the weaknesses? That is, how are interventions evaluated in Figure 1 and what is the relationship between these contributions and the rest of contributions? Are the contributions orthogonal to each other?
4. In equation (5), it is still a bit unclear what $q_i$ is. Could you please clarify this?
5. In section 5.1, why does 30% of the test set still have a spurious correlation? What is the rationale behind not removing the spurious correlation in its entirety to see how much the model truly depended on the shortcut for its predictions?
Regarding suggestions for the presentation, the paper is very well-written (thank you!). Nevertheless, the following possible errors and typos could be addressed before a camera-ready version:
- Typo in line 49: it should be “Mahinpei et al. [16] have shown” rather than “Mahinpei et al. [16] has shown”.
- Typo in line 96: the parenthesis is accidentally subscripted in $f(c|x)$.
- Typo in line 127: missing “the” between “in” and “concept predictor”.
- Typo in equation (3): I believe the last term should be $\mathcal{L}_y(g(f(x)), y)$ rather than $\mathcal{L}_y(g(c), y)$.
- Typo in line 196: the sentence started in this line is a bit confusing (possibly missing a couple of words).
- This came out earlier this year, so I think it is ok to miss the reference. However, for the sake of completeness, it is worth noting that recent work [2] does consider human error/uncertainty when performing interventions as opposed to the claim in lines 299-300. This is not a big deal as it happened within a few months before NeurIPS’ deadline, but it is worth noting for a future camera-ready version of the manuscript.
- nit: ChatGPT and BARD are missing citations.
- nit: “XAI” in line 26 is never explicitly defined.
- The citation to “[1]” in the paper’s references of the main text is incomplete (the SENN paper reference).
- I couldn’t find anything related to the “Oracle Impurity Score” in the original CEM paper as cited in the Appendix in line 160. A quick search for that term found that it is defined in a separate paper [3].
- Section 3 could benefit from a diagram explaining the overall training process and architecture.
- This is pure nitpicking on my end, but I am still unsure what the “coop” part of “Coop-CBM” stands for as it is never explained/defined.
### References
[1] Havasi et al. "Addressing leakage in concept bottleneck models." *NeurIPS 2022*.
[2] Collins et al. "Human Uncertainty in Concept-Based AI Systems." *arXiv preprint arXiv:2303.12872* (2023).
[3] Espinosa Zarlenga et al. "Towards robust metrics for concept representation evaluation." *AAAI 2023*.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Although included in the appendix, this work provides a discussion of some of the limitations and potential impacts of its work. If possible, the manuscript could benefit from including such a discussion in the main body of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the very detailed feedback. We have updated the README.
> 1.1 Clarification on the sparse concept experiment.
To provide clarity: when we mention "10%", it means that only 10% of randomly chosen concepts were used, and the same concepts were used across all the images. Hence the same group of 10% of training concepts was given for all samples of the training set. This setup is designed to replicate scenarios involving sparse concept labels, allowing us to investigate the model's performance under conditions where concept annotations are limited, reflecting real-world scenarios of constrained concept availability.
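For concreteness, a minimal sketch of this global-scale subsampling (the concept count, fraction, and helper name are illustrative, not the paper's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

n_concepts = 112   # e.g. a CUB-style concept bank (illustrative size)
keep_frac = 0.10   # the "10%" setting from the experiment

# Global-scale subsampling: ONE fixed subset of concept indices is drawn
# once, and the SAME subset is used for every training sample.
kept = rng.choice(n_concepts, size=int(keep_frac * n_concepts), replace=False)

def subsample_annotations(concept_labels):
    """Keep only the globally selected concept columns for a batch."""
    return concept_labels[:, kept]

batch = rng.integers(0, 2, size=(4, n_concepts))  # toy binary annotations
sparse_batch = subsample_annotations(batch)
print(sparse_batch.shape)
```

Local-scale subsampling would instead draw a fresh `kept` per sample; the rebuttal confirms the global variant.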
> 1.2 Clarification on the CBM-AR setup
In Figure 3 (left image, CUB dataset), the solid red line "Hard AR w/ side-channel" is our CBM-AR baseline, as the reviewer alluded to. We shall make the Hard AR setting more explicit in the camera-ready version.
> 1.3 Intuition behind Coop-CBM having such high accuracy in the sparse concept setting.
* We would like to clarify that the task accuracy specified in the tables of the appendix and the main paper is g(f(x)) where the label prediction is bottlenecked on concepts only. We would like to build intuition using Figure 1 from the rebuttal pdf. Coop-CBM outperforms Joint-CBM due to its inherent design that introduces a "collaboration" between concept and supplemental task prediction at the same hierarchical level. In Coop-CBM, a task predictor stream is included alongside the concept predictor, enabling a synergistic relationship between the two. The concept features learned in Coop-CBM capture essential characteristics of the input data even when concept annotations are sparse. Through collaborative learning, Coop-CBM's concept predictor stream identifies and extracts meaningful relationships between the input features, labels and concepts.
* In contrast, Joint-CBM despite being trained jointly follows a sequential approach where concepts are first predicted by the concept predictor stream and then utilized by the task predictor stream. This sequential approach can limit the capacity of the task predictor to capture subtle relationships in the data. Coop-CBM, by directly integrating the task predictor at the same hierarchical level, mitigates this limitation by combining both sets of information at an earlier stage, enabling better exploitation of the data.
* We have plotted a graph for interventions for CUB in the rebuttal pdf.
> How do the results reported in some of the experiments change if one selects the hyperparameters for competing methods [...]
We compare the classification accuracy for the CUB and TIL datasets across all of the baselines and hyperparameters mentioned. Due to lack of space, we were not able to add generalization results. We observe that CEM and coop-CBM are fairly stable across different hyperparameters.
>Could you please clarify the intervention concerns raised in the weaknesses? That is, how are interventions evaluated in Figure 1 and[...]
We would like to clarify the reviewer's concerns by describing Figure 1 from the main paper.
* Left to right: the first two graphs show the importance of using human uncertainty (SCS) as a metric to select interventions, compared to concept uncertainty (CUS) and concept weightage (CWS). We show this in two ways:
* First, we plot the accuracy of the CBM model (as used by other baseline metrics [8,16]) when the annotator correctly intervenes on the concept. Intuitively, the accuracy should increase.
* Second, consider a scenario similar to the first, except the annotator has changed: an annotator with ill intentions changes the concept values to incorrect concepts (instead of the ground-truth concepts). Plotting all of the metrics again, the accuracy should intuitively decrease.
* Left to right: the last two graphs of Figure 1 compare random interventions across the different baseline models. Since SCS is available for CUB only, we consider random interventions for both TIL and CUB. This also aligns with the evaluation method used in the baseline (CBM-AR, CEM) papers. We would also like to point out that this is a standard graph, also present in the baseline works. Hence we are not using any metric specific to coop-CBM, but we would like to emphasize that the concept representation of our model is enhanced due to COL. This enhancement is most evident on CUB.
>In section 5.1, why does 30% of the test set still have a spurious correlation? What is the rationale behind not removing the spurious correlation in its entirety [...]
The experiment in Section 5.1 was inspired by the invariant-learning/shortcut-learning literature [1]. We expose the model to an extremely strong correlation, so a model purely minimizing training error will tend to exploit the background color, and will then fail at test time because the background color setting is changed. If we removed the spurious correlation entirely (0%) at test time, we would not be able to evaluate performance well, since a naive shortcut model would fail completely (it would be at chance). Hence the literature flips the direction of the correlation at test time, setting the test-time correlation to approximately $1-\text{strength}$ (usually between 20% and 30%).
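A toy sketch of this train/test correlation-flip protocol (the helper name and exact values are illustrative, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def spurious_color(labels, corr_strength):
    """Assign a spurious 'background color' that agrees with the label
    with probability corr_strength (toy sketch of the Sec. 5.1 setup)."""
    agree = rng.random(len(labels)) < corr_strength
    return np.where(agree, labels, 1 - labels)

y_train = rng.integers(0, 2, size=1000)
y_test = rng.integers(0, 2, size=1000)
color_train = spurious_color(y_train, 1.0)  # fully spurious at train time
color_test = spurious_color(y_test, 0.3)    # direction flipped: ~1 - strength

# A shortcut model predicting purely from color looks perfect at train
# time but drops well below chance at test time, exposing the shortcut.
acc_train = np.mean(color_train == y_train)
acc_test = np.mean(color_test == y_test)
print(acc_train, acc_test)
```

This is why a partially flipped test set (20%-30%) is more informative than removing the correlation outright: the shortcut model's failure is visible as below-chance accuracy rather than noise.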
>What the main takeaways of the experiments in Section 5.3 [...]
The experiments in 5.1 and 5.2 modify the input image space, while 5.3 changes the concept space. In 5.3, we introduce a direct spurious correlation between the image and the concepts (as compared to the image-label correlation earlier).
>Compute and error bars.
We have included compute in the rebuttal pdf. We shall include error bars from the missing tables in the camera-ready version.
coop-CBM $\approx$ cooperative-CBM. We will clarify this in the camera-ready version.
We thank the reviewer again for the insightful comments. We hope we covered all concerns. We are happy to answer more questions during the discussion period.
[1] Arjovsky, Martin, et al. "Invariant risk minimization." arXiv preprint arXiv:1907.02893 (2019).
---
Rebuttal Comment 1.1:
Comment: Thank you so much for your thoughtful rebuttal. I have gone over it, and I appreciate the clarification of some of the doubts I had regarding missing details in the manuscript. Below I list a few leftover questions (marked as **Q**s) from your reply above:
> when we mention "10%", it signifies that merely 10% of randomly chosen concepts were employed across all the images
Thanks for clarifying this! To avoid future confusion, I would suggest being explicit about this if possible in the updated paper.
> the red solid line Hard AR w/ side-channel is our baseline
Same as above, thanks for clarifying this! This is a non-trivial detail missing from the paper so I appreciate the promise to include it in the updated version.
> The concept features learned in Coop-CBM capture essential characteristics of the input data even when concept annotations are sparse....
I can somewhat see this, but I would appreciate it if further evidence was presented in favour of this hypothesis as it is hard to quantify a lot of the items in this argument. For example, the concept predictor in a Joint-CBM still receives feedback from the task labels even if this feedback comes implicitly via backpropagation, so I find it very surprising that the difference between joint CBMs and Coop-CBMs is as significant as the one shown in this paper. Not that this means that your presented intuition/argument is wrong in any way, but it is hard to be fully convinced by it without more evidence.
For example, something I would consider to be evidence **against the argument offered** in the rebuttal is the following: In Figure 2 of the rebuttal pdf, it is shown that even when all concepts are being intervened on in both Coop-CBMs and Joint-CBMs, the accuracy of Coop-CBMs is much higher. If the true power of Coop-CBMs lies in it learning a better concept predictor due to its introduction of direct task feedback at the same hierarchical level, then this power should be almost entirely removed when one intervenes on **the entire bottleneck** as all activations generated by the concept encoder are overwritten by their ground-truth intervention values at this point. Therefore, it is very surprising and unexpected that even in this situation one sees Coop-CBMs being significantly more accurate than Joint-CBMs as their label predictors are operating on pretty much exactly the same bottleneck (i.e., there is no chance of extra information leaking from the concept encoder to the label predictor).
**(Q)** Am I misunderstanding something? If so, I would very much appreciate it if the authors could clarify this. If not, then I would argue that there may be something in there that is not well-understood and that would benefit from further study.
> We observe that CEM and coop-CBM are fairly stable to diff hyperparams.
Thanks for including this! Could you please point me towards the table/figure and/or analysis for this ablation? I am sorry if I missed it in my previous reads. I found Table 7 in the appendix before, but this table is not mentioned anywhere in the appendix or the main paper, and I wasn't sure of the main conclusions from it.
**(Q)** Regarding my comment on this point in the weaknesses above, I would appreciate it if the authors could clarify why setting the concept weight across all methods to be the same is fair and whether this could potentially affect the results observed in this paper.
> We would like to clarify the reviewer's concerns by describing Figure 1 from the main paper.
Thank you for this. There may be a misunderstanding, but my concern was regarding some of the missing details on the evaluation rather than on the validity of the study performed. I agree that the way these experiments were run is standard and follow what has been done elsewhere in the literature. And I believe that if certain details are included in the main body of the manuscript, such as those raised in points 1.2 and 1.3 of the rebuttal, then this would strengthen the intervention-related sections.
**(Q)** On Figure 1, on a third glance, I noticed that CEM's intervention accuracy in the last plot of Figure 1 for CUB is not as high as that reported in its original paper. Do you have an intuition as to why this is the case? I understand your CUB setup is slightly different as you have more concepts than that in the CEM and CBM papers. However, I am wondering whether the difference observed could be related to the lack of correct fine-tuning (e.g., $\alpha$) for baselines discussed elsewhere in my review. If so, then this may be a bit unfair for competing methods. Any further clarifications on this point would be much appreciated.
> Hence the literature changes the direction of correlation 1-strength during test time (usually between 20%-30%)
This makes sense; thank you so much for clarifying.
> We have included compute in the rebuttal pdf... error bars...
Thank you, I appreciate this.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for their thoughtful questions. We are happy to hear we were able to alleviate some of the concerns and questions by the reviewer. Please find answers to your further questions below:
>Joint CBM vs Coop-CBM intervention
Intuitively, this is likely due to Coop-CBM+COL minimizing the concept loss better (helped by the auxiliary loss, which aids representation learning), which results in a clearer separation of logits. Note that the standard intervention protocol for CBMs replaces the concept predictions with the 5th percentile (for ground-truth 0s) or the 95th percentile (for ground-truth 1s) of the training logits. We verified via histograms that the logits of Coop-CBMs have a clearer separation, with strong peaks at the beginning and end of the interval and without a wide cluster in between. Joint-CBMs, on the other hand, produce logits with significantly more values in between, which in turn moves the 5th- and 95th-percentile values closer together than for Coop-CBMs. We will add this analysis and the corresponding logit histograms to the paper. Please find below the 5th and 95th percentiles on the training set for both Joint- and Coop-CBMs:
| Model | avg 5th percentile | avg 95th percentile |
|--------------|----------------|-----------------|
| Joint CBM | 0.25 | 0.83 |
| Coop-CBM+COL | 0.03 | 0.92 |
We believe this constitutes strong evidence in favor of this interpretation of the results.
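The percentile-based intervention protocol described above can be sketched as follows (a simplified illustration; the function name and toy logit values are ours, not the paper's code):

```python
import numpy as np

def intervene(pred_logits, gt_concepts, train_logits):
    """Standard CBM test-time intervention (sketch): replace each
    intervened concept's predicted logit with the 5th percentile
    (ground truth 0) or 95th percentile (ground truth 1) of that
    concept's logits over the training set."""
    lo = np.percentile(train_logits, 5, axis=0)   # per-concept 5th pct
    hi = np.percentile(train_logits, 95, axis=0)  # per-concept 95th pct
    return np.where(gt_concepts == 1, hi, lo)

# Toy illustration: well-separated training logits (as reported above
# for Coop-CBM+COL) yield intervention values close to 0 and 1.
train = np.array([[0.02, 0.95], [0.05, 0.90], [0.01, 0.93], [0.04, 0.97]])
out = intervene(np.array([0.5, 0.5]), np.array([0, 1]), train)
print(out)
```

Under this protocol, the tighter 5th/95th percentiles of Coop-CBM+COL in the table above directly translate into more decisive intervention values than Joint-CBM's 0.25/0.83.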
> Concept weights
We have included the table of suggested concept weight parameter experiments (from the first review) in the global response, Table 6.
As mentioned in the original paper, we set the concept weightage to 0.01. This hyperparameter is fixed across all of our experiments. The initial intuition for selecting a value in this range came from Figure 2 of the CBM paper; this is also the value for which the results on the CUB dataset are reported in the original paper. Further, we performed some ablation studies regarding this, as noted by the reviewer, in the appendix.
> CEM results
* We would like to tie this to the previous two answers. The CEM paper selects a concept weightage for CUB of 5 (p. 19, A.6), which is the hyperparameter potentially most divergent from our selected value. We tried our best to find the intuition behind the selection of this $\alpha$ value, or any related ablation study, but could not find one in the CEM paper. This value is particularly interesting since they use the same $\alpha$ value across their baselines, counter to the values used in the original CBM paper. Hence, following their setup for comparison as well, we set the same concept weightage value (0.01) across the baselines.
* As the reviewer suggested, our setup differs slightly in the number of concepts. We would also like to note that our backbone is different: we use Inception V3 as the backbone across models.
* Due to constrained time and resources, we conducted a small grid search over hyperparameters for CEM (0.01, 0.1, 1.0, 10.0) on CUB, and we observed that 0.01 gives the best task accuracy without much compromise in concept accuracy.
* Finally, the $\alpha$ hyperparameter particularly controls the behavior of a model under interventions, which is most evident on CUB. In the adjacent graph for TIL, however, we do not observe such a large disparity.
* As in the CBM and CEM papers, we use the same value across all of the experiments. Indeed, changing a hyperparameter would change the results for any baseline, but to the best of our knowledge we have followed the regime of our baseline CBM and CEM papers. Additionally, we performed ablations for the baseline methods to select the best operating point, and all of them converge to 0.01. We shall include these additional ablation results in the final version.
We hope that the above points answered your question and concerns. We would be happy to answer any further concerns. Additionally, if we have answered your question, we would be grateful if you can kindly increase the score.
Title: Thank you for your response | Summary: The authors introduce an auxiliary task in CBM training to incorporate more information regarding the downstream task in the input representation for predicting concepts. They also introduce Concept Orthogonal Loss (COL) to enforce orthogonalization between input representations of different concepts, and vice versa. Experiments show superiority of their method over existing CBM baselines on the performance of the downstream task, for standard training settings as well as settings with manually-injected distribution shifts.
Strengths: * Simple and straightforward presentation of the proposed methods. Very easy to read.
* Proposed method makes intuitive sense.
* Solid experiments.
* Experiments on extensive settings to demonstrate the superiority of CBMs over standard models w/o concepts.
Weaknesses: * Risk of information leakage: Mahinpei et al. have shown that CBMs may leak information about the target task through the concepts, as also mentioned in the paper (P2L50). Whether the performance gain of coop-CBM comes from information leakage is not entirely clear.
* Assumption that all concepts should be orthogonalized: The proposed COL implicitly assumes that all concepts should be orthogonal. The performance gains shown in the experiments suggest that this assumption holds in the tested settings. In general, user-selected concepts may not be entirely orthogonal. This method may be limiting when a user does not know whether adopting COL is appropriate given the chosen concepts.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I am willing to increase the score and accept the paper as I genuinely believe this work is a good solid step in terms of making concept-based models more viable if the following issues are resolved properly.
### Risk of information leakage
My main concern is that coop-CBM improves downstream task performance because downstream task information is leaked through the concepts. This makes intuitive sense, as the explanation for the performance gain is that the representation used to predict concepts contains more downstream task information. Actually, I am not sure where the authors stand on the topic of leaking information through the concepts. Do the authors believe that benign leakage of information is acceptable?
Perhaps experiments could be repeated by clipping the concept predicted values in coop-CBM to hard labels (as suggested in Mahinpei et al's paper) to reduce the leakage and see if there is still performance gain (or maybe this is already the default behavior indicated somewhere in the work that I missed).
The risk of information leakage is that the concept scores then no longer purely represent the concept, but rather encode the downstream task label in some implicit manner. Then the whole concept bottleneck idea breaks down: the bottleneck is no longer a bottleneck, and explainability through concepts is no longer valid.
### Assumption of all concepts should be orthogonalized
The authors extended the idea of concept whitening to CBMs through COL. However, orthogonalizing every concept may not be always beneficial (e.g. related concepts). Are the authors claiming orthogonalizing concepts is always beneficial? Is it possible to characterize when to adopt COL? Perhaps it is impossible to make definite statements about when COL would work but is it possible to at least provide a good heuristic that serves as a good-enough guideline?
For example, concepts in CUB are grouped into attributes of different parts of birds (e.g. body, head, wing). Does applying COL to inter-group concepts make sense? Perhaps something could be done with prior knowledge of which concepts correlate more with others? Are there easy metrics that could serve as proxies for concept correlation?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > My main concern is that coop-CBM improves downstream task performance because downstream task information is leaked through the concepts. This makes intuitive sense as the explanation of performance gain is the representation used to predict concepts contain more downstream task information[....]
* We agree with the reviewer that information leakage is an important concern for concept-based models. As pointed out by the reviewer, and multiple times in our main paper, we have considered the impact of information leakage. This leakage occurs when the concepts that are supposed to be independent and interpretable become influenced by each other due to shared task-related information.
* To overcome this issue, COL is designed to encourage orthogonality among the learned concept representations. Mathematically, COL adds an orthogonal regularization term to the loss function used during the training of concept classifiers. This term encourages the learned concept representations to be orthogonal to each other, thereby reducing any inadvertent inter-concept correlations.
* One way to evaluate concept leakage in the literature has been through concept accuracy. COL enhances concept independence for concept-based models. In Table 2 (main paper), we evaluate the concept accuracy of each baseline model on the CUB dataset. Here, as one would expect, the independent ("hard") CBM has the highest concept accuracy in the absence of COL. But with the introduction of COL, the concept representation of each baseline is learned in a way that reduces information leakage. Coop-CBM+COL achieves concept accuracy comparable to the independent (hard) CBM, with a higher mean and comparable uncertainty.
* This result aligns with the fundamental goals of explainability and interpretability, which are pivotal in ensuring that the concept-based model's representations remain meaningful and free from unintended correlations, ultimately enhancing the model's utility and reliability in various applications.
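Since the rebuttal does not reproduce COL's exact formulation, the following is only a hedged sketch of one common way to encourage orthogonality between concept representations, namely penalizing off-diagonal entries of the Gram matrix of normalized concept embeddings; the paper's actual loss may differ:

```python
import numpy as np

def orthogonality_penalty(E):
    """Hedged sketch of an orthogonality regularizer (not necessarily
    COL's exact form): penalize off-diagonal cosine similarities
    between concept embeddings.

    E: (k, d) array, one d-dimensional embedding per concept."""
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalize rows
    gram = E @ E.T                                    # pairwise cosine sims
    off_diag = gram - np.eye(len(E))                  # zero out the diagonal
    return np.sum(off_diag ** 2)

# Orthogonal embeddings incur zero penalty; correlated ones do not.
orth = np.eye(3)
corr = np.array([[1.0, 0.0, 0.0], [1.0, 0.1, 0.0], [0.0, 0.0, 1.0]])
print(orthogonality_penalty(orth), orthogonality_penalty(corr))
```

Added to the training objective with a weight, such a term pushes concept representations apart, which is the disentanglement effect the rebuttal attributes to COL.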
>Perhaps experiments could be repeated by clipping the concept predicted values in coop-CBM to hard labels (as suggested in Mahinpei et al's paper) to reduce the leakage and see if there is still performance gain (or maybe this is already the default behavior indicated somewhere in the work that I missed).
Thank you for suggesting the experiment. Following your suggestion, we performed two sets of experiments on CUB and TIL datasets using coop-CBM.
* First, we trained the model by clipping the predicted concept values to “hard” labels (Table 3, Exp1 in rebuttal pdf).
* Second, we trained the model as we have described earlier in the paper (using soft labels) and the model evaluates on the test set by clipping to “hard” labels (Table 3, Exp2 in rebuttal pdf).
From the above experiments, we conclude that the model is able to learn a good representation of the concepts without necessarily leaking information.
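The clipping used in these two experiments amounts to thresholding the soft concept predictions, along these lines (a sketch; the threshold and helper name are ours):

```python
import numpy as np

def clip_to_hard(soft_concepts, threshold=0.5):
    """Clip soft concept predictions to hard {0, 1} labels, as in the
    leakage-reduction check suggested by Mahinpei et al. (sketch)."""
    return (soft_concepts >= threshold).astype(float)

soft = np.array([0.03, 0.48, 0.51, 0.92])
hard = clip_to_hard(soft)
print(hard)  # [0. 0. 1. 1.]
```

Exp1 applies this during training (the label predictor only ever sees hard concepts), while Exp2 trains on soft concepts and applies the clipping only at test time.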
>Are the authors claiming orthogonalizing concepts is always beneficial? Is it possible to characterize when to adopt COL? Perhaps it is impossible to make definite statements about when COL would work but is it possible to at least provide a good heuristic that serves as a good-enough guideline?
We would like to connect this answer with the previous one. The motivation for COL arose from mitigating information leakage in coop-CBM, and it is applicable to other concept-based models too. The rationale for COL revolves around its ability to counteract the inadvertent propagation of task-specific information across concept representations. By imposing this constraint, COL effectively disentangles the influence of different concepts, fostering their independence and reducing the risk of cross-concept information leakage, thereby improving interpretability and the effectiveness of interventions. Also, please see below for detailed answers.
>For example, concepts in CUB are grouped into attributes of different parts of birds (e.g. body, head, wing). Does doing COL on inter-grouped concepts make sense? Perhaps something could be done with prior knowledge of which concept correlates more with others? Are there easy metrics that could serve as a proxies for concept correlation?
* Devising an optimal criterion for when to use such a loss function could be great future work building on ours. Also, prior knowledge about concept groups cannot be assumed. One way to characterize the correlation could be through mutual information, which could be an interesting follow-up work.
* We would like to emphasize that COL can be used to improve the concept representation of any concept-based model. For instance, considering the case of Joint-CBMs, which has been known to exhibit information leakage, the incorporation of COL can effectively increase separation in concept learning leading to enhanced interpretability while mitigating the risk of information leakage (See Table 2 main paper).
* In order to investigate the effectiveness of COL we propose a comprehensive experiment that deliberately explores an extreme case for evaluation in line with the reviewer's proposal. We aim to showcase the robustness of COL across different scenarios, extending the OOD experiments in the main paper.
Experiment: We consider a scenario where input concepts are intentionally duplicated to create a high degree of concept correlation. In the experiment in Table 4, we duplicated 10%, 25%, 50%, and 100% of the concepts and added them to the original concept bank. This is a worst-case representation of "similar concepts". From the table, we see that the duplication of concepts does not impact the concept or the task accuracy.
Additionally, this experiment contributes to the broader understanding of how COL performs in various scenarios.
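As a concrete version of the mutual-information proxy suggested above, one could estimate pairwise MI between binary concept annotations; a minimal sketch (names and toy data are illustrative):

```python
import numpy as np

def binary_mi(a, b, eps=1e-12):
    """Mutual information (in nats) between two binary annotation
    vectors; a simple proxy for concept correlation, as suggested."""
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            pxy = np.mean((a == x) & (b == y))   # joint probability
            px, py = np.mean(a == x), np.mean(b == y)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py + eps))
    return mi

rng = np.random.default_rng(0)
c1 = rng.integers(0, 2, size=10000)
c2 = c1.copy()                       # duplicated concept: MI near ln 2
c3 = rng.integers(0, 2, size=10000)  # independent concept: MI near 0
print(binary_mi(c1, c2), binary_mi(c1, c3))
```

High pairwise MI would flag concept pairs (such as the duplicates in Table 4) for which enforcing full orthogonality may be questionable.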
We thank the reviewer again for the detailed questions and insightful additional experiment suggestions. We hope we addressed all concerns and are happy to answer and further questions during discussion period.
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for the additional experiments. Those were quite insightful and definitely proved a couple of points.
Before examining the experiment results: based on the authors' response, it seems that the authors consider concept correlation a cause of information leakage, and thus COL a cure.
> This leakage occurs when the concepts that are supposed to be independent and interpretable become influenced by each other due to shared task-related information.
I don't believe that to be true. Consider the following toy scenarios for illustration:
1. The target prediction $y$ is one of the concepts $c_i$. Even if we orthogonalize the concepts mutually, the information of $y$ is still leaked through $c_i$. → Mutual orthogonalization of concepts does not imply no leakage of target information.
2. The concepts and target are completely independent. Let us duplicate concept $c_i$ as $c_i'$ and add it to the concept set. Then the concepts are not mutually orthogonal, but no information leakage is happening. → No leakage of target information does not imply mutual orthogonalization of concepts.
Following these arguments, concept mutual orthogonalization and target information leakage are two completely different conditions, with no direct mutual implication. Please correct me if the authors believe this to be untrue.
In that case, COL is not a cure for information leakage and concept disentanglement is not a proxy for measuring information leakage. Instead, information leakage should be measured with mutual information between the concepts and target label directly.
Now moving on to the experiments: Table 3 shows the experiments with predicted concepts clipped to hard labels, and we do observe a performance drop compared to the non-clipped version. Particularly for CUB, the performance of Coop-CBM drops below that of CEM. This suggests that if we apply some preventative measures against information leakage, Coop-CBM performs similarly to the non-coop version. The direct implication is that the performance of Coop-CBM may come from information leakage. It would be a stronger argument if the authors also presented the results for AwA2.
For Table 4, the authors conducted a very interesting experiment on how duplicating a concept (the worst-case scenario for correlated concepts) affects the performance of Coop-CBM. I must applaud the authors for putting their method under such scrutiny, experimenting in such extreme scenarios. As the authors have stated, the performance of Coop-CBM is maintained at a similar level regardless of how much duplication happens. At first glance, this seems to imply the robustness of the method. But consider the implications: these duplicated concepts are completely correlated and are then de-correlated with COL. How is it possible for completely dependent concepts to be de-correlated? Well, correlation is linear, so as long as the latent dimension is large enough, we can find enough orthogonal directions to represent the same concept. This suggests that even if we apply COL and the disentangling loss indicates the concepts are disentangled, the concepts may still be dependent; in this case, completely dependent (duplication). This is quite alarming. Correlation is not independence. De-correlating does not necessarily remove the dependence between concepts (in the probabilistic sense).
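The reviewer's point, that orthogonal directions can carry a duplicated concept, is easy to verify numerically; a minimal illustration (ours, not taken from the paper or review):

```python
import numpy as np

rng = np.random.default_rng(0)

# A duplicated concept can be carried by two ORTHOGONAL latent
# directions, so an orthogonality loss can be satisfied even though
# the two concepts are fully dependent.
n = 1000
c = rng.integers(0, 2, size=n).astype(float)  # concept c_i (binary)
noise = rng.normal(size=(n, 2))
# Latent representation: c written into two separate coordinates,
# plus two noise coordinates.
H = np.column_stack([c, c, noise[:, 0], noise[:, 1]])

w1 = np.array([1.0, 0.0, 0.0, 0.0])  # reads the concept from coord 0
w2 = np.array([0.0, 1.0, 0.0, 0.0])  # reads the SAME concept from coord 1

print(w1 @ w2)                        # 0.0: the directions are orthogonal
pred1, pred2 = H @ w1, H @ w2         # ...yet both recover c exactly
print(np.array_equal(pred1, c), np.array_equal(pred2, c))
```

Both readout directions are orthogonal, yet each reproduces the duplicated concept perfectly: zero correlation between directions, full probabilistic dependence between the concepts they encode.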
---
Reply to Comment 1.1.1:
Title: Response to the reviewer's detailed feedback
Comment: We thank the reviewer for their detailed feedback. Please find our comments below.
Initially, we would like to point out that we do not claim that COL leads to a 100% leak-proof model, but that it encourages disentanglement between concept predictions. This results in more accurate concept predictions, suggesting more trustworthy explanations. The seminal work on leakage [8] (the CBM-AR baseline) uses concept accuracy to evaluate leakage in CBMs, and it does not achieve full accuracy on concepts either. Also, we believe that limiting the discussion to the two extreme cases brought up by the reviewer obscures important nuances.
>Regarding 1)
For $y=c_i$ and information only passing through $c_i$, we disagree with the notion that this should be termed leakage. In this case, $y$ could be explained in terms of concept $c_i$, as intended.
>Regarding 2) and the topic of co-occurring concepts
COL does not yield perfectly orthogonal representations, as the reviewer suggests, but encourages a degree of disentanglement between concepts. In the discussion with other reviewers, we also presented an interpretation, with supporting evidence, that this might be due to the learning of an overcomplete sparse representation. In particular, COL does not achieve perfect disentanglement between co-occurring concepts. However, given an overcomplete representation capturing different combinations of a concept $c_i$ with others, the concept prediction layer can make the concept prediction invariant wrt specific combinations by learning weighted sums over multiple differently entangled features. This improves concept accuracy, enabling task predictions based on concepts. For details, please refer to our answers to reviewers VWed and K2hb. Specifically, we analyzed the learned representations and concept predictions using histograms. In the analysis for reviewer VWed, we show that COL induces a very sparse representation. In the analysis for reviewer K2hb, we show for coop-CBM+COL that the resulting concept predictions are more separated, with clear peaks at 0 and 1.
>Regarding Table 4
The intermediate classification loss (coop-) aids representation learning, while COL encourages disentanglement (only between non-identical concepts) in the representation. As described above, this enables the introduction of invariance of individual concept predictions wrt other concepts. This improves concept accuracy and the downstream model can rely on these more accurate concepts, instead of leakage as seen in Table 4, which also explains why coop+COL enables more effective interventions (see the discussion with reviewer K2hb).
>Regarding mutual information
We would like to point out that we already mentioned, in Appendix H, that MI could be investigated. Experiments from [35] may suggest that there could potentially be a correlation between concept accuracy and the MI plane (this needs exploration!). We use concept accuracy, a widely accepted metric for leakage, as used in works that address leakage [8,16]. [8] uses MI to evaluate the completeness of a side channel, which is different from leakage. The connection between the independence of concept predictions and leakage is discussed on page 2 of [8], which motivated our work on COL.
>Regarding Table 3
Experiments on AwA2 had not finished earlier. Please find the results below.
| | Std | Exp1 | Exp2 |
|----------|-------------|-------------|-------------|
| Coop-CBM | 96.6+-0.1 | 96.3+-0.2 | 96.5+-0.1 |
| +COL | 97.0+-0.1 | 96.7+-0.2 | 97.0+-0.2 |
In Table 3, we conduct two experiments, where the performance of Coop-CBM (without COL) is slightly lower than that of CEM (NB: compared against standard, non-clipped training of CEM) only on the CUB dataset for Exp1. Despite that, and as mentioned in our paper, COL improves concept and label representation. Hence, using clipping as a preventive measure against leakage along with COL, our model significantly outperforms the baselines, including CEM. Further, we now highlight the difference in performance between Exp1 and Exp2. In Exp1, we round soft concept probabilities to 0/1 during training, while in Exp2, we round concept values to 0/1 at test time. The bigger drop in performance in Exp1 could also be explained by non-smooth changes in the input distribution of $g$ during training when concept predictions cross the 0.5 boundary. Regardless, we would like to point out that coop-CBM performs on par with standard black-box models while also providing explanations for CUB, and surpasses the performance of baselines, incl. black-box models, on TIL and AwA2. With the addition of COL, we surpass all baselines on all considered datasets, despite constraining the model to hard concepts during both training and evaluation.
We hope that our answers and the comparison (suggested by the reviewer) alleviate the reviewer’s concerns. Additionally, if we have answered all of your questions, we would appreciate it if the reviewer would consider accepting the paper as mentioned. | Summary: The authors propose coop-CBMs + concept orthogonal loss (COL) to address previous limitations of concept-based models, with a focus on learning relevant concept representations that consequently boost model performance.
Strengths:
In general, the idea of coop-CBMs+COL is simple and effective. The method increases prediction accuracy on benchmark datasets and seems to work well on noisy inputs.
Weaknesses: * The overall structure of the paper is reasonable, but parts of the substructure in some sections are not well organized. In the introduction section, the authors first introduce the definition of previous concept bottleneck models, then point out their limitations. This is followed by the importance of studying the robustness of CBMs in various domains. After that, the text returns to the bottleneck of CBMs. Unless the authors aim to describe totally different limitations of previous CBMs, this feels unnecessary.
* In Section 3, the authors do not specify what $\hat{\theta}$ and $\hat{\Phi}$ stand for; it would be better to present each formula together with its notation. Otherwise, the description looks messy.
* The authors do not show experimental results for classification using Coop-CBM alone.
* It's confusing to call Coop-CBM both a component of the final strategy and the whole strategy incorporating it. It is a bit unclear what the final method is: Coop-CBM or Coop-CBM + COL?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Modify the structure/content arrangement of the introduction section.
2. Do more literature review in Section 2 on the relationship between concept-based models and explainability, if applicable.
3. Please clarify the notations in the method section.
4. Please clarify what your final method is: Coop-CBM or Coop-CBM + COL?
5. Add classification results for only using Coop-CBM on the CUB, AwA2 and TIL datasets.
6. It would be better to convey the significance of the proposed method, not just some improvements on the benchmark datasets.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No.
It would be better to discuss the limitations of the proposed methods (Coop-CBM along with COL) in the `Discussion and Conclusion` section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback.
> The overall structure of the paper is reasonable, but part of substructure in some sections are not well organized.
Thank you for your feedback. We shall rewrite that section by moving the two paragraphs about distributional shifts and limitations of CBMs around. We will update the camera ready version accordingly to improve readability.
> the author does not specify what does $\hat{\theta}$, $\hat{\phi}$ mean.
$\hat{\theta}$, $\hat{\phi}$ are the parameters of the encoder and $g$ correspondingly (please see Figure 1 from rebuttal PDF).
> The author does not show experimental results for using CBM alone on classification Coop-CBM.
In our initial submission, we have already conducted comprehensive evaluations of CBM, coop-CBM, and the inclusion of COL, (please refer to Tables 1, 3, 4, and 5 in the main paper, along with Tables 1, 2, 3, 4, and 5 in the appendix). Furthermore, we have conducted an additional analysis in Table 2 by isolating the effect of COL within different baseline models. This analysis demonstrates that regardless of the chosen concept model, the incorporation of COL consistently enhances concept accuracy.
> It's confusing to call Coop-CBM both a component of the final strategy and the whole strategy incorporating it. It is a bit unclear what the final method is: Coop-CBM or Coop-CBM + COL?
As per the title of the paper, we are suggesting losses that can be used to improve the generalization of concept-based models. We will add a sentence in the introduction to emphasize this. Our contribution in terms of the model is twofold: first, to improve task accuracy for concept-based models via Coop-CBM, and second, to improve concept accuracy for concept-based models via COL. We empirically show that each of the strategies alone works, but the most effective strategy is definitely to use Coop-CBM + COL. From the penultimate lines of Tables 1, 3, 4 and 5, we observe that Coop-CBM alone improves task accuracy, but with COL we can further enhance it. As mentioned in our paper, information leakage is detrimental to concept accuracy and hence to the explainability of the model. We thus add the COL loss to increase the independence of learning each concept from the others, as observed in Table 2 of the main paper for CUB.
> do more literature review on section2 on the relationship between Concept-Based Model and explainability if applicable.
We did our best to include all of the recent related references on concept-based models that were released before the NeurIPS deadline in the related works section of the main paper. We will add all new references mentioned by other reviewers. Please let us know if you think we are missing a particular reference, and we will include it in the camera-ready version. In Appendix D of the original submission, we have included a further literature review on explainability.
> please clarify what is your final method, Coop-CBM or Coop-CBM + COL ?
Our contribution encompasses the introduction of both coop-CBM and COL. Both of these methods, individually and when combined, exhibit improvements in explainability, accuracy and test-time interventions.
> add classification results for only using Coop-CBM on CUB, AwA2 and TIL datasets.
The main paper has classification results for using coop-CBM on the CUB, AwA2 and TIL datasets in Table 1. In addition, all of our other tables in the original paper include results for coop-CBM alone as well. The penultimate row of each of the mentioned tables, labeled “Coop-CBM”, shows classification results using only Coop-CBM. The “+COL” row shows the classification results using Coop-CBM + COL. We will add a sentence to each table caption to highlight this.
> better to tell the significance of the proposed method, not just limited to some improvements in the benchmark dataset.
This work introduces two significant contributions to concept-based models. Firstly, we propose a multi-task model that predicts an intermediary task label alongside concept predictions. This is particularly useful when dense and relevant concept annotation is lacking, as seen in the TIL dataset. Secondly, we incorporate orthogonality constraints in the concept representation space through concept orthogonal loss during training. This loss enhances inter-concept separation while reducing intra-concept distance. Through extensive experiments across diverse datasets and distributional shifts, we observe that the bottleneck layer preceding the final prediction enhances the robustness of concept-based models against spurious background correlations. Coop-CBM combined with COL demonstrates leading performance in both task and concept accuracy.
Additionally, we would like to point out that most of the current CBM works do not perform extensive experiments on distributional shifts. Our paper is the first work that attempts to shed light on the impact of using CBM on multiple different distributional shifts. We have attempted to justify and motivate our work both empirically and intuitively. In Section 3, we introduced the intuition behind our contributions and then throughout the paper we provide analytical experiments which highlight each contribution’s effectiveness.
Please point us to any specific issues and we will address them during the discussion period.
> it would better to discuss the limitations of proposed methods ( Coop-CBM along with COL) in Discussion and Conclusion section.
We have included limitations in Appendix B. Following your suggestion we will move this discussion to the Conclusion for camera-ready version.
We have attempted to answer all of the concerns of the reviewer and would be happy to answer any further questions. We thank the reviewer again for their feedback.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of rebuttal
Comment: Thank you for the detailed response and the additional clarification. The detailed rearrangements back this paper with stronger evidence. Also, the clarifications make the writing clearer and your proposed Coop-CBM easier to understand.
Another minor concern: do you use the same training strategies for the datasets in Table 2 and Table 3 (I mean the in-domain datasets here)? If so, what explains the difference between the in-distribution accuracy results in Table 3 and those in Table 2?
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: We are grateful for the feedback provided by the reviewer that strengthened our paper, and we are glad that our reply clarified most of the reviewer’s doubts. With regards to training arrangements in Table 2 and Table 3, please find our answer below:
* Using Table 2, we observed that our proposed loss COL can be applied to any baseline method to improve their concept accuracy. We would like to point out that Table 2 is not part of the OOD experiment as mentioned in the corresponding text.
* In Table 1, we evaluated the task accuracy (classification label prediction) of the baselines and our proposed model on the test splits provided by the datasets. Similarly, for Table 3 we are evaluating the task accuracy. In Table 3, as the reviewer alluded to and mentioned in the text, we create synthetic datasets to evaluate the performance of models in the presence of spurious correlation.
* In the synthetic datasets, we spuriously correlated the background with the label for the CUB dataset and hair colour with gender for the CelebA dataset. Let us take the CUB dataset as an example: we segment the foreground of each image and add a coloured background. We generate 200 background colours and correlate each with one of the 200 class labels. An image from the in-domain test set then follows the same distribution as the training images, i.e., the spurious correlation is present, but in the out-domain test set the strength of this correlation is reduced, strongly impacting the accuracy. The strength of the correlation is 80% for the in-domain train and test sets and 30% for the out-domain test set. We have included an example of in-domain train, in-domain test and out-domain test images in our supplementary material in Figure 4.
* This synthetic dataset, constructed specifically for contrasting in- and out-domain performance, is different from the original CUB dataset, which is why the in-domain test performance also differs.
* The hyperparameters for each model are kept fixed across experiments in Table 1, Table 2 and Table 3.
We hope that our answer appropriately addressed the question and concerns. We would be happy to answer any further concerns. Additionally, if we have answered your questions, we would be grateful if you would consider increasing the score. | Rebuttal 1:
Rebuttal: Global rebuttal.
Pdf: /pdf/a021ddf7098ef265989c2426033a4a29f8a84247.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes auxiliary losses to learn concept-bottleneck models: a coop task loss, and a COL (orthogonal concept loss). The experiments show that the proposed coop-cbm can perform 1-3% better than standard black-box models and outperform other CBM models on three datasets: CUB, AwA2 and TIL.
==================
post-rebuttal: the authors have addressed my concerns and questions in the rebuttal. I would like to increase the rating from 4 to 6.
Strengths: 1. This topic is important
2. The paper compared with 3 fairly new CBM baselines (CBM [12], CEM [35], CBM-AR [8]), but stronger baselines are not compared [33, 18]. The authors mentioned it's an unfair comparison to [33] in Lines 228-229, but it's still not clear why, nor why [18] is not compared either.
Weaknesses: 1. The proposed idea is interesting, though the two losses (the coop loss in Eq. 3 and COL in Eq. 6) seem to have been proposed in prior works. The authors mention that COL is from [21], and the coop loss in Eq. 3 feels similar to the non-interpretable "residual" terms in Post-hoc CBM [33]. So the main contribution seems to be integrating them? Please clarify and compare if there is some misunderstanding here.
2. The description of some related work seems inaccurate. Lines 74-76: what does it mean to "suffer from pretrained model's biases"? Post-hoc CBM [33] with residual terms can improve downstream task accuracy. It looks like coop performs better due to the residual loss plus joint training (since Post-hoc CBM only trains the last 2 layers)? Besides, the results in Label-free CBM [18] show that it is comparable to the original downstream task performance without the residual terms. Please compare with [33] and [18] with an ablation study.
3. Large scale datasets results are missing (ImageNet)
4. The exposition of the technical part can be improved (Sec. 3). For example, to describe f, g, h, it would be useful to visualize the structure. phi and theta are not defined. Also, since f and h actually differ by one linear layer, why do both of them still use theta? It seems phi is the parameter of g.
**minor**
- an extra s in Line 277 "modelss"
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. It's not clear why the concepts necessarily need to be orthogonal. Could the authors explain?
2. In Eq. 5, how do you define intra-concepts? It looks like you are using c_i = c_j, which means the concepts are exactly the same? Please elaborate on the details.
3. Lines 228-229: why is it unfair to compare with Post-hoc CBM [33]? Do you mean that it is expected that [33] with joint training can outperform coop-CBM? Please clarify.
4. Please report computation cost for baselines and the proposed method in Table 1.
5. How is Fig 1 calculated? Is it correcting the mis-classified images in the test time through intervention? Please give more details on how Fig 1 is plotted.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I did not see a dedicated paragraph to discuss the limitation of the proposed work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback.
>The authors mentioned it's an unfair comparison to [33] and why [18] is not compared either.
[18] and [33] use a pre-trained model, CLIP, which was trained on a massive corpus of data, to obtain concepts. This can potentially introduce inherent biases from pretraining into the concepts. This was also brought up in the Limitations and Conclusion section of [33]. Furthermore, the dissimilarity in the concepts employed in these works makes it complex to establish a fair and meaningful comparison. Moreover, we wish to **emphasize** that neither of these works directly compares with CBM variants in their main paper, except for [33] in Appendix C. It is also not possible to compare on realistic medical datasets, as CLIP fails to generate meaningful concepts there. Regardless, we have compared [18, 33] with our method on CUB+OOD datasets in Table 1 of the rebuttal PDF; our model outperforms [18, 33], whose accuracy is lower than that of the standard model.
>The two losses (COOP loss Eq 3, and COL in Eq 6) seem to be proposed in prior works. The authors mentioned that COL is from [21].
Orthogonality and alternative loss functions have been active research areas. [21] focused on establishing orthogonality in the embedding space between features of distinct classes. Therefore, the loss function was added just before the classifier layer. However, the COL we propose aims to induce orthogonality among the different concepts predicted. This introduces orthogonality between the respective linear layers associated with each concept. As a result, our formulation considerably deviates from that of [21], as we're introducing orthogonality within a distinct space and across different dimensions.
>It feels like the COOP loss in Eq 3 [...] Do you mean that it is expected that [33] with joint training[..]
We cannot make claims about how PCBM-h [33] would perform when jointly trained, but it should be noted that the residual term in [33] is non-interpretable, which might defeat the purpose of concept-based explanations, while also being less accurate than the standard model. Hence, unlike [33], we can achieve higher task accuracy while maintaining high concept accuracy (Table 2, main paper).
>Large scale datasets results are missing (ImageNet).
We have validated our method on 3 diverse datasets: TIL is a realistic medical dataset, AwA2 is a large-scale 13GB dataset and we have performed multiple realistic evaluations on distributional shifts. Most works on CBMs do not evaluate on ImageNet apart from [18] whose concept acquisition methodology is different.
>For example, to describe f, g, h, it'll be useful to visualize the structure.
We have added the figure in the rebuttal PDF.
>Also, since f and h actually differ in one linear layer, why do both of them still use theta? it seems phi is the parameter of g.
$\theta$ denotes the encoder parameters. It should be noted that f has multiple linear layers as well. Yes, $\phi$ is the parameter of g. We shall make this clearer in the camera-ready version.
>It's not clear why the concepts necessarily need to be orthogonal.
As briefly mentioned in Section 3, we introduce orthogonality to increase the separation between different concepts. Independent CBMs [12], which take binary (0/1) concept values as input to the concept predictor, exhibit the highest concept accuracy, suggesting enhanced interpretability. Simultaneously, [8,16] indicated that information leakage via "soft" concept labels leads to low concept accuracy, compromising explainability. COL introduces orthogonality constraints among the different predicted concepts, promoting their independence and reducing the potential for information leakage between them. By encouraging orthogonality between concept representations, COL promotes the learning of distinct, disentangled, and meaningful high-level concepts. COL leads to three significant benefits: 1) it enhances concept representation and accuracy (see Table 2 of the main paper on CUB), 2) it improves task accuracy, and 3) it helps to make more meaningful interventions, i.e., with a lower intervention budget, one can achieve higher accuracy. Orthogonality in concept learning is crucial for models aiming to balance accuracy and explainability and to prevent information leakage.
>Computation cost
We have added this in Table 2 of the rebuttal PDF
>Please give more details on how Fig 1 is plotted.
* Left to right: the first two graphs show the importance of using human uncertainty (SCS) as a metric to select interventions, compared to concept uncertainty (CUS) and weightage (CWS). We show this in two ways:
* First, we plot the accuracy of the CBM model (as used by other baseline metrics [3,26]) when the annotator correctly intervenes on the concept. Intuitively, the accuracy should increase.
* Second, consider a scenario similar to the first, except that the annotator has changed. An annotator with ill intentions changes the concept values to incorrect concepts (instead of GT concepts). We again plot all of the metrics; intuitively, the accuracy should decrease.
* Left to right: the last two graphs of Figure 1 compare random interventions across different baseline models. Since SCS was available for CUB only, we consider random interventions for TIL and CUB.
>In eq 5, how do you define intra-concepts? c_i= c_j clarification.
In retrospect, we would like to rewrite as $d_1=\sum_{\substack{i,j\in B,\\\\c_i^a = c_j^a\\\\a\in A}}\frac{q_i^T q_j}{||q_i|| ||q_j||};d_2=\sum_{\substack{i,j\in B,\\\\ c_i^a\neq c_j^a \\\\a\in A}} \frac{q_i^T q_j}{||q_i||||q_j||}$ where $A$ is the set of all concepts. The condition $c_i^a = c_j^a$ brings the representations of concepts with the same label closer.
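For concreteness, the rewritten $d_1$/$d_2$ terms can be sketched in a few lines of NumPy. This is only our illustrative reading of the formula for a single concept $a$: variable names are ours, and self-pairs $i=j$ are excluded from $d_1$, since their cosine similarity is trivially 1.

```python
import numpy as np

def col_terms(q, c):
    """Cosine-similarity sums over a batch of concept representations.

    q: (B, D) array of concept representations q_i for one concept a.
    c: (B,) array of binary concept labels c_i^a for that concept.
    Returns (d1, d2): summed cosine similarity over same-label pairs (d1)
    and different-label pairs (d2), excluding i == j.
    """
    qn = q / np.linalg.norm(q, axis=1, keepdims=True)      # row-normalize
    sim = qn @ qn.T                                        # pairwise cosine similarities
    not_self = ~np.eye(len(c), dtype=bool)
    same = (c[:, None] == c[None, :]) & not_self           # intra-concept pairs
    diff = c[:, None] != c[None, :]                        # inter-concept pairs
    return sim[same].sum(), sim[diff].sum()
```

Presumably the loss then combines these terms so that intra-concept similarity ($d_1$) is encouraged and inter-concept similarity ($d_2$) is penalized, matching the stated goal of increasing inter-concept separation while reducing intra-concept distance.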
>Limitations
In the original submission, it is part of Appendix B. We will move it to the main paper.
We would like to thank the reviewer again for the valuable feedback. We ran the requested experiments and attempted to answer all of the concerns of the reviewer and would be happy to answer any further questions.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of rebuttal
Comment: Dear authors,
Thank you for the detailed response and the additional clarification + experiments. The additional experiment comparison in rebuttal Table 1 with [18, 33] on the CUB did strengthen this paper with stronger evidence. Also, the clarification fig 1 makes it much clearer to understand the proposed Coop-CBM.
I have no other concerns and will increase the rating to acceptance with rating 6. Please include the rebuttal results, including updated equation 5, Table 1-2, Fig 1 to the camera ready version to improve the quality of the paper.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: We are grateful for the feedback provided by the reviewer, and we are glad that our reply addressed your concerns. Your feedback has strengthened our work. We will incorporate your feedback into the camera-ready version. We would be happy to address any further questions/suggestions that might come up until the end of the discussion period. | null | null | null | null | null | null |
Improving Language Plasticity via Pretraining with Active Forgetting | Accept (poster) | Summary: This paper proposes active forgetting, a rather straightforward method that resets the embedding layer every K updates during pretraining, to quickly adapt PLMs to new languages. Through experiments on different language pairs with RoBERTa, the authors claim that the proposed method can induce faster convergence and better performance when the languages are distant from English.
Strengths: - This paper is well-written in general, with a clear motivation and some novelty.
- It is appreciated that the authors conduct experiments on many languages, from similar languages like German to distant languages like Thai.
- The method itself is counter-intuitive, as it periodically resets the embedding layer, which intuitively could be bad for PLMs, yet it is effective in improving cross-lingual transfer.
- The authors give in-depth analysis and insights into their proposed method, which I believe is interesting to the community.
Weaknesses: Some figures are inconsistent and visually poor. For example, the upper three subfigures in Figure 4 have a visually bad axis scale. Some values are rounded to integers while others are not, as in Figure 6. The authors should carefully redraw and prettify them in the camera-ready version.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - In the abstract, the paper claims that it is data and compute inefficient to learn a new embedding layer. But this paper does not address this problem, since a new embedding layer for new languages still has to be learned. Therefore I don't think it is necessary to mention that in the abstract.
- The paper claims the method ensures high sample efficiency. How does the experiment support this?
- Forgetting is “generally” a bad thing for PLMs, but this work counter-intuitively shows that active forgetting can be beneficial for cross-lingual transfer. The authors are encouraged to give more intuition or explore the reasons behind this phenomenon, in a cross-lingual perspective.
- For the experiment w.r.t. RQ3, the authors show that active forgetting is particularly helpful when the new languages are (typologically) different from the pretraining language. However, I also see an important possible influence: the script. The authors are encouraged to also explore whether scripts can influence the transfer performance under active forgetting.
- The frequency of active forgetting is set to K=1000. As this is an important hyperparameter, I would encourage the authors to justify their choice.
- I would be very interested to see a plot of loss over updates during language adapting. I would expect the loss to go down in general but fluctuate a lot every time when active forgetting is used.
- It’s natural to think that active forgetting could be applied to other parts of the model instead of merely the embeddings. Therefore, it would be very interesting to explore: active forgetting on which part of the model is the most effective?
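The reset schedule these questions refer to (re-initializing only the embedding layer every K updates, with K=1000 per the review) can be sketched as a toy loop. The function and parameter names below are illustrative, and the actual gradient updates are elided:

```python
import numpy as np

def pretrain_with_active_forgetting(num_steps, K=1000, vocab=8, dim=4, seed=0):
    """Toy loop illustrating the active-forgetting schedule: the embedding
    matrix is re-initialized every K updates, while the body parameters
    are kept and trained continuously. Returns the number of resets."""
    rng = np.random.default_rng(seed)
    emb = rng.normal(size=(vocab, dim))    # token embedding layer (reset periodically)
    body = rng.normal(size=(dim, dim))     # transformer body stand-in (never reset)
    resets = 0
    for step in range(1, num_steps + 1):
        # ... a real implementation would apply a gradient update to emb and body here ...
        if step % K == 0:                  # active forgetting: reset embeddings only
            emb = rng.normal(size=(vocab, dim))
            resets += 1
    return resets
```

In a real PLM, only the token-embedding parameters would be re-initialized at each reset, while the transformer body keeps training uninterrupted; the same pattern could in principle be pointed at other parameter groups, as the last question suggests.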
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors include a section of Limitations in the paper, where they mention that this paper only focuses on the simplest forgetting, which is only applied to the embeddings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
### We appreciate that you recognise the soundness and contribution of our work. We would like to address your comments as follows.
**Figures 4 and 6**: Thank you for pointing out the issues. We will polish the figures in the camera-ready version.
**Q1**:
> In the abstract, the paper claims that it is data and compute inefficient to learn a new embedding layer. But this paper does not address this problem, since a new embedding layer for new languages still has to be learned.
We would like to clarify the confusion here. Although the new embedding layer still has to be learned, forgetting PLMs require less compute and data in the new language to reach a good performance:
- 1) less compute. Fig 4 in our paper shows that forgetting PLMs converge fast within $5K$ adaptation steps. So we can actually stop the adaptation training earlier than the standard methods. For example, on XQUAD, in order to converge to $90\%$ of their final performances, forgetting PLMs take only ~$5K$ steps while standard PLMs take ~$20K$ steps. This leads to **4x** speedup for the language adaptation run.
- 2) less adaptation data. We ran a few ablations on the adaptation data quantity and studied its impact on learning a new embedding layer. Fig 3 in our paper summarizes the results for learning a new English embedding layer. We can see that, when the adaptation data is less than $10M$ (often the case for low-resource languages), forgetting PLMs consistently outperform standard PLMs. Thus, to reach the same level of performance in the low-resource setting, forgetting PLMs require less adaptation data while relearning the English embeddings (an extreme example is in Fig 3 of our main paper: to reach 40 on NLI, standard PLMs require ~$100K$ adaptation data while forgetting PLMs only require ~$5K$).
**Q2**:
> The paper claims the method ensures high sample efficiency. How does the experiment support this?
The second point in our response to Q1 addresses this question. More evidence can be found by comparing the performance drop of standard PLMs and forgetting PLMs when the adaptation data changes from a high-data setting [18,27] to the low-data setting that our work considers:
| Method | Avg adaptation #tokens | Avg XNLI performance |
| :---------------------- | ---------------------: | -------------------: |
| Kelly et al 2022 [27] | 10.3B | 72 |
| Artetxe et al 2022 [18] | 569M | 66.7 |
| Standard | 5M | 53.3 |
| Forgetting | 5M | 62.7 |
We can see that, when the adaptation data is reduced from 569M to 5M, forgetting only drops about 6% (from 66.7 to 62.7) while standard drops about 20% (from 66.7 to 53.3). When the adaptation data amount drops, forgetting PLMs still retain a relatively good performance compared to standard PLMs.
**Q3**:
> The authors are encouraged to give more intuition or explore the reasons behind this phenomenon, in a cross-lingual perspective.
Our intuition is that periodically forgetting the token embedding layer forces the transformer body to learn better high-level abstractions. Every time forgetting happens, the body has to "re-derive" these abstractions; by repeating this process again and again, the body eventually learns to abstract instead of taking the shortcut of memorizing particular embedding values. During cross-lingual transfer, a body with more high-level abstraction can be more easily transferred to new languages, since high-level abstractions are more language-agnostic.
Our intuition is also supported by the cognitive science literature (see Sec 5.1 in our paper), where forgetting is shown to be beneficial for learning to abstract [29,30] and for learning new languages [28].
We will add more discussion on this in the camera-ready version.
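To make the mechanism concrete, the following is a minimal, hypothetical PyTorch sketch of active forgetting (toy module names and a dummy objective; not our actual fairseq training code): the token embedding layer is re-initialized every $K$ optimizer steps while the transformer body keeps training.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy stand-in for a PLM: a token embedding layer plus a 'body'."""
    def __init__(self, vocab_size=100, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.body = nn.Linear(dim, dim)  # placeholder for the transformer body

    def forward(self, tokens):
        return self.body(self.embed(tokens))

def forget_embeddings(model):
    # Active forgetting: re-initialize only the token embeddings;
    # the body parameters are untouched and keep training.
    nn.init.normal_(model.embed.weight, std=0.02)

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
K = 5  # forgetting frequency (we use K=1000 in the paper)

for step in range(1, 21):
    tokens = torch.randint(0, 100, (8, 4))
    loss = model(tokens).pow(2).mean()  # dummy pretraining objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % K == 0:
        forget_embeddings(model)  # periodic "forget" event
```

Note this sketch keeps the Adam state across forgetting events; whether to also reset the optimizer state for the embedding parameters is a separate design choice.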
**Q4**:
> For the experiment w.r.t. RQ3, the authors show that active forgetting is particularly helpful when the new languages are (typological) different from the pretraining language. However, I see also an important possible influencer: the script.
This is indeed a very valuable comment. We did observe some impact of the script. For example, although Vietnamese and Swahili are both distant from English, forgetting brings only modest or no improvements on them. We suspect this might be because they are written in the Latin script, the same script as the pretraining English. We will add this discussion in the camera-ready version.
**Q5**: Frequency of Forgetting
Please see general response.
**Q6**: Loss Curves of Forgetting
Yes, forgetting indeed creates a spike, and the model then learns to recover to a normal loss. Please see the general response for details.
**Q7**:
> active forgetting on which part of the models is the most effective?
We are also excited about this direction. Our guess is that it will depend on the task. Token embeddings play a big role in cross-lingual transfer, so forgetting on token embeddings is effective. But for other tasks, forgetting on other parts of the model might be more effective. We will explore this in the future.
------
References 1-26 can be found in our responses to the other reviewers.
[27] Marchisio, Kelly et al. Mini-model adaptation: Efficiently extending pretrained models to new languages via aligned shallow training. ACL 2023 Findings
[28] Levy, Benjamin J et al. Inhibiting your native language: The role of retrieval-induced forgetting during second-language acquisition. Psychological Science, 2007
[29] Simon Nørby. Why forget? on the adaptive value of memory loss. Perspectives on Psychological Science, 2015
[30] Michael C. Anderson and Justin C. Hulbert. Active forgetting: Adaptation of memory by prefrontal control. 2022
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed response; all my questions have been answered now. | Summary: The paper proposes an active forgetting mechanism in PLM pre-training for cross-lingual transfer and adaptation. The authors propose a multi-stage adaptation framework for better cross-lingual transfer/adaptation: 1) first, resetting the embedding layer every K updates during pre-training of a monolingual RoBERTa model; 2) second, adapting parameters separately (embeddings vs. backbone) for language-specific and task-specific knowledge (language+task adaptation). The paper shows the proposed mechanism enables faster convergence during cross-lingual transfer and adaptation as well as better downstream task performance.
Strengths: * The paper shows an interesting empirical finding that resetting partial parameters (in this case embeddings) during pre-training in English leads to better transfers/adaptation in cross-lingual settings.
* The paper is clear-written and the proposed method is effective empirically for cross-lingual transfers.
Weaknesses: - Although the proposed method achieves good performance on downstream tasks, this work lacks proper ablation experiments showing whether the gain comes from resetting during pre-training vs. from adapting/resetting parameters separately during the language+task adaptation stage. I.e., the paper should include results for the following 4 types of experiments:
* Standard pre-training + standard adaptation
* Forgetting pre-training + standard adaptation
* Standard pre-training + language/task adaption
* Forgetting pre-training + language/task adaption
- Practically, efficiency (parameters and training) could be an issue with the language adaptation + task adaptation pipeline.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * See weaknesses.
* Have you tried resetting other parameters during pre-training? What about other PLMs? E.g., does DeBERTa-style training affect the results?
* References:
1. Chen Liu, Jonas Pfeiffer, Anna Korhonen, Ivan Vulić, and Iryna Gurevych. Delving Deeper into Cross-lingual Visual Question Answering. Findings of EACL, 2023 → Partial resetting, and re-initialization of parameters for cross-lingual generalization in VL setting.
2. Vijaya Raghavan T Ramkumar, Elahe Arani, Bahram Zonooz, Learn, Unlearn and Relearn: An Online Learning Paradigm for Deep Neural Networks, TMLR, 2023 → Selective parameters resetting for continual learning (online and few-shot).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We appreciate that you recognise the effectiveness of our method and find our finding interesting. We would like to address your concerns as follows.
### First, we want to elaborate on our experimental setup.
> this work lacks proper ablation experiments on showing the gain is from resetting during pre-training, vs the gain of adapting/resetting parameters separately during the language+task adaptation stage. ...
We ran quick experiments on Arabic and include numbers for standard adaptation here. However, we also want to highlight the difference between standard adaptation and language/task adaptation.
Standard adaptation relies on **labelled** data, which is expensive for a new downstream language. In contrast, language/task adaptation **does not use any labelled data**; it only uses unlabelled data from the new language. In our case, we found only $6.7K$ Arabic NLI examples. This amount of labelled data is not enough to adapt an English NLI model to Arabic NLI without proper regularization.
Our experimental setup follows [18], where standard pretraining + language/task adaptation (MonoTrans) is shown to be competitive among a few baselines for zero-shot unsupervised cross-lingual transfer. On top of this finding, our proposed forgetting method can further improve the sample-efficiency of the language/task adaptation, reducing the amount of unsupervised data needed for the new language. This is motivated by the practical scenario where a new language contains only several thousand to a few million tokens (e.g. the corpus for the new language might contain only 2-3 books).
| Method | Supervised Data | Unsupervised Data | Arabic XNLI Acc |
| :---------------------------------------------- | --------------: | ----------------: | --------------: |
| standard pretraining + standard adaptation | 6.7K | 0 | 32.8 |
| standard pretraining + language/task adaptation | 0 | 5M | 41.2 |
| forget pretraining + standard adaptation | 6.7K | 0 | 34.2 |
| forget pretraining + language/task adaptation | 0 | 5M | 59.7 |
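For clarity, the language-adaptation stage discussed above can be sketched as follows (a hedged toy example with assumed module names, not our actual code): the pretrained transformer body is frozen and only a freshly initialized embedding layer for the new language is trained on unlabelled text.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a frozen pretrained "body" and new-language embeddings.
new_embed = nn.Embedding(100, 16)   # re-learned for the new language
body = nn.Linear(16, 16)            # pretrained transformer body (frozen)
for p in body.parameters():
    p.requires_grad = False

# Only the new embedding layer receives gradient updates.
opt = torch.optim.Adam(new_embed.parameters(), lr=1e-3)
body_weight_before = body.weight.detach().clone()

tokens = torch.randint(0, 100, (8, 4))        # unlabelled new-language tokens
loss = body(new_embed(tokens)).pow(2).mean()  # dummy LM-style objective
opt.zero_grad()
loss.backward()
opt.step()
```

After the step, the body weights are unchanged while the new embeddings have been updated, which is the separation of language-specific (embeddings) and language-agnostic (body) knowledge that the adaptation stage relies on.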
### Second, we would like to address your questions.
**Q1**:
> Have you tried resetting other parameters during pre-training?
We did try resetting the bias terms in the language model head, though it didn't help much in our preliminary experiments.
We decided to focus on the token embedding layer as prior work [18-24] demonstrates that the token embedding layer, which captures most lexical meaning, is crucial for cross-lingual transfer. Moreover, since the token embedding layer is not only the first layer but effectively also the last layer (RoBERTa and many other language models use tied token embeddings), resetting the token embedding layer is equivalent to resetting the last layer. This echoes the findings in [4,5,7,8,10], where resetting **later** layers is more effective for model plasticity.
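The tied-embedding point can be illustrated with a short snippet using PyTorch's standard weight-tying idiom (illustrative shapes): because the LM head and the token embedding share the same parameter tensor, resetting the embedding also resets the output layer.

```python
import torch
import torch.nn as nn

embed = nn.Embedding(100, 16)             # vocab of 100, hidden size 16
lm_head = nn.Linear(16, 100, bias=False)  # projects hidden states to vocab
lm_head.weight = embed.weight             # weight tying, as in RoBERTa

nn.init.normal_(embed.weight, std=0.02)   # "forget" the embeddings ...
logits = lm_head(torch.randn(2, 16))      # ... and the LM head is reset too
```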
In the future, it could be interesting to try resetting other parts and see if they benefit particular tasks. For example, we could leverage outcomes from the line of work on interpretable LMs: if we understand the function of a particular subnetwork, we can consider selectively forgetting it to help relevant downstream tasks, achieving effects similar to parameter-efficient tuning methods [25-26].
**Q2**:
> What about other PLMs? e.g. Does deBERT style training affect the results?
We would like to extend our experiments to more pretrained models, and DeBERTa sounds like a good candidate. However, we are currently bottlenecked by computational resources, as is much other pretraining research. One complete run of the entire experimental pipeline requires at least 2 pretraining runs, 48 language adaptation runs, and 6 task adaptation runs, which takes about 38460 GPU hours (V100 32GB). We are working hard to extend to more models but would require more time and computational resources.
On the other hand, we chose RoBERTa as it is one of the most widely-used open-sourced pretrained language models. The effectiveness of our method on RoBERTa shows the potential of our method for the transformer family, as our method is not tied to a particular RoBERTa component but generalizable to any language model with a token embedding layer. We will open-source our code to facilitate future research on other pretrained models.
**References**:
Thank you for the references. We will add them to our camera-ready version.
-----
References 1-14 can be found in our response to Reviewer ar7w. References 15-17 can be found in our response to Reviewer TTi1
[18] Artetxe, Mikel et al. "On the Cross-lingual Transferability of Monolingual Representations." ACL 2020.
[19] Minixhofer, Benjamin, et al. "WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models." NAACL 2022.
[20] Dobler, Konstantin et al. "FOCUS: Effective Embedding Initialization for Specializing Pretrained Multilingual Models on a Single Language." arXiv preprint arXiv:2305.14481 (2023).
[21] Tran, Ke. "From English to Foreign Languages: Transferring Pre-trained Language Models." (2019).
[22] Jain, Neel, et al. "How to Do a Vocab Swap? A Study of Embedding Replacement for Pre-trained Transformers." (2022).
[23] Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer Learning for Low-Resource Neural Machine Translation. EMNLP 2016
[24] Chao Xing, et al. Normalized Word Embedding and Orthogonal Transform for Bilingual Word Translation. NAACL 2015.
[25] Hu, Edward J., et al. "LoRA: Low-Rank Adaptation of Large Language Models." ICLR 2021.
[26] Sourab Mangrulkar, et al. PEFT: State-of-the-art Parameter-Efficient Fine-Tuning methods. 2022
---
Rebuttal Comment 1.1:
Comment: Thank you very much for providing clarifications and additional supporting results.
Would you please comment more on the infrastructure you used for training and training time?
Will you open source your code and data to ensure reproducibility?
---
Reply to Comment 1.1.1:
Comment: Thank you for the recognition of our rebuttal. For your questions:
> Would you please comment more on the infrastructure you used for training and training time?
Sure. We run our experiments on an HPC cluster, where each node has $8$ GPUs, $500$ GB of CPU memory and $80$ cores. Our main experimental GPUs are Tesla V100s with $32$ GB of GPU memory, as described in Sec 4 of our paper. Our software stack is PyTorch and fairseq. We use FP16 training.
Each successful pretraining run (one hyper-parameter configuration) takes $24$-$32$ hours on $32$ V100s, i.e. ~$1000$ GPU hours. The $32$ GPUs are spread over $4$ nodes with $8$ GPUs each. Each language adaptation run takes the same time as one pretraining run, except that we have to do it for every language. Each task adaptation run takes $6$-$12$ hours on $1$ V100 for each of the three tasks.
> Will you open source your code and data to ensure reproducibility?
Yes, we will open-source code. As for data, we use [CC100](https://data.statmt.org/cc-100/) for our pretraining and language adaptation (pretrained on English then adapted to a target language), which are publicly available. Our evaluation data are also public benchmarks: MLQA, XQuAD, XNLI. We will release relevant preprocessing scripts on these datasets for fostering reproducibility. | Summary: This work introduces a training technique that leverages actively resetting token embedding to improve zero-shot language transfer. Experiments on RoBERTa show consistent improvement in multiple languages, distant languages in particular.
Strengths: 1. Simple and innovative approach.
2. Consistent improvement across languages and tasks.
3. Insightful analysis.
Weaknesses: 1. Only experimented on one pretrained model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How sensitive is this method to the choice of forgetting frequency?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
### We appreciate that you recognize the simplicity and effectiveness of our method. We address your comments as follows.
> Only experimented on one pretrained model.
We would like to extend our experiments to more pretrained models. However, we are limited by computational resources. One complete run of the entire experimental pipeline requires at least 2 pretraining runs, 48 language adaptation runs, and 6 task adaptation runs, which takes about 38460 GPU hours (V100 32GB).
On the other hand, we chose RoBERTa as it is one of the most widely-used open-sourced pretrained language models. The effectiveness of our method on RoBERTa shows the potential of our method for the transformer family, as our method is not specific to a particular RoBERTa architecture but generalizable to any language model with a token embedding layer. We will open-source our code to facilitate future research on other pretrained models.
Q:
> How sensitive is this method to the choice of forgetting frequency?
Please refer to the general response.
---
Rebuttal Comment 1.1:
Comment: Thanks for the explanation. | Summary: This paper follows the language adaptation procedure of MonoTrans (Artetxe et al., 2020), and proposes a new pre-training method with active forgetting. By resetting the embedding layer every K updates during training, the language model learns to learn the new embedding fast, similar to a meta-learning effect.
The paper conducts experiments on several languages and evaluates the adapted models on zero-shot XNLI, MLQA and XQuAD. Unlike the experimental setting of MonoTrans, the paper considers a low-resource pre-training setting where an "unseen" language has as few as 5 million tokens for the adaptation step. The results show that models pre-trained with active forgetting converge quickly during language adaptation and outperform the baseline models.
Strengths: The paper studies an important research question, i.e., language adaptation in a low-resource setting. Since the success of language models relies on large-scale pre-training, how to adapt the language models to low-resource languages without large-scale training data is beneficial for low-resource NLP research.
The proposed active forgetting is simple and effective for the following language adaptation step. Experiments demonstrate that it outperforms baseline models for zero-shot classification and question answering tasks.
Weaknesses: - I am confused about goals and means:
- The goal of language adaptation / MonoTrans (Artetxe et al., 2020) is to transfer existing high-resource language models (English in particular) to unseen languages, and the method is to learn language-specific word embeddings, i.e., language adaptation.
- The goal of this paper seems to improve language adaptation results, rather than the goal behind language adaptation. The key is, transferring the currently existing models rather than pre-training new models. I understand the proposed method can benefit language adaptation. However, it greatly limits the scope, only the pre-trained models with the active forgetting can have this effect, i.e., currently existing pre-trained models are out of scope.
- Experimental setup. As mentioned in L273, the paper claims that multilingual pre-trained language models have the issue of needing large corpora. Since many studies have shown that multilingual models have cross-lingual transferability, training multilingual language models in the low-resource setting should be a baseline.
- It would be great to provide the pre-training loss curves, showing how active forgetting affects the pre-training procedure.
- (Minor) L192, Page7 footnote, missing ".". Figure 6, "XNLI Accuracy vs Adaptation Steps" is too small.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - If active forgetting even requires a re-pretraining step, how to adapt currently existing pre-trained models without re-training them?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. We would like to address your concerns as follows.
### First, we would like to clarify our goal, motivation and contributions.
> The goal of this paper seems to improve language adaptation results, rather than the goal behind language adaptation. The key is, transferring the currently existing models rather than pre-training new models.
We agree with you that it is important to study various ways of transferring existing pretrained models, and there are already plenty of nice works in this line, e.g. adapters [2,13], regularization [11] and many others, which we discuss in the related work section. However, the `transferability` of existing standard pretrained models is limited, requiring non-trivial effort to adapt/extend these models to new languages [2,11-13].
An alternative is to improve pretraining technology so that we can make more transferable pretrained language models (PLMs). This line of research receives little attention despite its importance in alleviating the expensive cost of data and compute in downstream adaptation. As the field will see more pretrained models coming out in the following years, we argue that now is a good time to consider such new pretraining technology, because we are running out of high-quality data and language models are getting bigger and bigger.
This is highly relevant to plasticity research [3-10], where plasticity is defined as changing model predictions **with as little new information as possible**. In the context of language adaptation, pretrained models with "plasticity" should adapt to new languages with as little data in the new language as possible, and therefore reduce the data cost of downstream adaptation.
To sum up, our goal is not to reach another SoTA in language adaptation but rather using language adaptation as **a testing bed for understanding plasticity of language models**. We bring together efforts from different communities[3-10] and show that pretraining with active forgetting can be promising for improving language models' plasticity.
### Second, we want to elaborate on our experimental setup
> Since many studies have shown that multilingual models have cross-lingual transferability, training multilingual language models in the low-resource setting should be a baseline.
We agree that multilingual PLMs can be a meaningful baseline for grounding our numbers. We are training a multilingual pretrained baseline. We will update the results here once they are ready in the following days.
Beyond that, we would like to emphasise that our work tackles a different scenario. We aim for a flexible language model: whether the pretraining corpus is monolingual or multilingual, this language model should easily generalise to **unseen languages**. This is different from the scenario of multilingual PLMs like XLM-R [1], which requires seeing data for all languages from scratch. Once pretraining is done, if a new language distant from the pretraining languages needs to be supported, multilingual PLMs might still struggle with **zero-shot transfer**, as shown in several low-resource language studies [15-17].
### Other Concerns and Questions
> It would be great to provide the pre-training loss curves, showing how active forgetting affects the pre-training procedure.
Please see the general response. The active forgetting creates an episodic learning pattern during pretraining, which is often seen in reinforcement learning or meta-learning.
> (Minor) L192, Page7 footnote, missing ".". Figure 6, "XNLI Accuracy vs Adaptation Steps" is too small.
Thank you, we will fix them in the camera-ready version.
Q:
> If active forgetting even requires a re-pretraining step, how to adapt currently existing pre-trained models without re-training them?
Thank you, this is indeed a valuable question.
As discussed in our related work section and Part 1, there are many ways to adapt existing pretrained models to new languages according to prior research [2,11,13]. Our contribution is rather on the pretraining side. We hope our work can inspire research institutions/companies to **improve pretraining technology** and deliver PLMs with more "transferability". In this way downstream users can adapt them cheaply and without too much tweaking.
On the other hand, it could be very exciting to take an existing model and make it perform active forgetting. We will leave this for future work since it is beyond the scope of this paper and requires further exploration.
-----
References 1-14 can be found in our response to Reviewer ar7w.
[15] Ebrahimi, Abteen, et al. "AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages." ACL 2022.
[16] Adelani, David Ifeoluwa, et al. "MasakhaNER: Named Entity Recognition for African Languages." TACL 2021.
[17] Adelani, David, et al. "MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition." EMNLP 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my concerns about the re-pretraining issue and the experimental setup. I have updated the score. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable comments. We would like to address some common questions in this general response. **Figures are attached in the rebuttal pdf.**
### Active Forgetting Creates An Episodic Learning Pattern
Reviewers were curious about the loss curves of standard pretraining versus forgetting pretraining, so we have included both in Rebuttal Figure 1. We can see that forgetting pretraining creates a very interesting loss-curve pattern. If we zoom in, as shown in Rebuttal Figure 2, episodic learning is happening even though we use the same data throughout pretraining. This is quite different from introducing diversity by including as many languages as possible in the pretraining corpus; it is more akin to what happens in reinforcement learning or meta-learning.
### Impact of Forgetting Frequency
We would like to elaborate on our choice of forgetting frequency $K$. In our preliminary experiments, we tried $K=100, 1000, 5000$. We found that $K=1000$ works well and thus stuck with it. Since we do not want to overtune the hyperparameters, we use the same $K$ for all experiments.
We include the loss curves for $K=100$ and $K=5000$ here. We can see that both forgetting too frequently and forgetting too infrequently hurt performance. Too-frequent forgetting leaves little time for the body to learn something meaningful (the pretraining loss gets stuck around 11). Too-sparse forgetting makes it hard for the body to adjust to the next forgetting event, causing divergence as pretraining goes on.
Pdf: /pdf/15fdd22fd758de632819170cfc4708b174d42762.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents an embedding forgetting mechanism for pre-training, aimed at enhancing robustness in downstream shift embedding fine-tuning. Focusing on the low-resource regime, the study conducts experiments on 10 simulated low-resource languages across three tasks: XNLI, XQUAD, and MLQA.
Strengths: (1) This paper introduces a simple yet effective method, akin to a form of regularization, for improving the multilingual capabilities of a PLM. It demonstrates impressive performance when adapting to downstream X-tasks.
(2) The writing is clear, and the illustrative figures are informative.
Weaknesses: (1) The paper is somewhat limited in scope, as it only applies to low-resource multilingual tasks within the framework of Artetxe et al. [2020].
(2) To propose a simple yet effective method, I believe solid experiments or rigorous proof are necessary; however, both are missing. In the experimental section, there is a lack of comprehensive ablation studies (e.g., why set low-resource data at 5M? Is this the real scenario in the multilingual setting? Why not use actual low-resource data instead of simulating one, considering that real low-resource data is different from sampled ones? How does the update frequency affect the results?).
(3) The paper lacks comparisons with other established methods, such as multilingual pre-training, multilingual adapters, and multilingual regularization techniques, among others.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) Did you try different hyper-parameters for standard PLM and forgetting PLM when fine-tuning?
(2) Did you tie the weights of embedding and LM head?
Thanks for the authors' detailed rebuttal, which addressed most of my concerns. However, the weaknesses still exist. I'll keep my score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the review. We want to clarify a few misunderstandings and address your concerns as follows.
### First, we want to revisit and emphasise the scope of this work.
> (1) The paper is somewhat limited in scope, as it only applies to low-resource multilingual tasks within the framework of Artetxe et al. [2020].
Our motivation is to improve language models' plasticity. Plasticity of neural networks has been studied in graph learning, computer vision and reinforcement learning [3-10], where forgetting-and-relearning methods show promise. Our goal is to study plasticity in the context of pretrained language models. We believe this is an emerging research direction that will thrive in the following years.
However, translating the `plasticity` concept to the language model setting is non-trivial due to the lack of clear experimental setups. We note that, despite model differences, almost all language models begin with a token embedding layer. As it is often tied to a specific vocabulary, the token embedding layer limits plasticity, preventing generalisation to a new vocabulary.
This observation inspires us to explore the plasticity of language models by manipulating the token embedding layer. Artetxe et al. [2020] drew our attention as it offers a nice experimental framework of **only manipulating the token embedding layer** for adapting between languages. We are not trying to improve SoTA multilingual models but rather testing whether pretraining with active forgetting is promising.
In this sense, our work is well-scoped rather than "limited in scope", when contextualised in the line of plasticity research [3-10]. Our choice of Artetxe et al. [2020] as the experimental framework is also well-justified. We will elaborate on this in our camera-ready version.
### Second, we want to address your concerns about our low-data experimental setup.
> Why not use actual low-resource data instead of simulating one
>
> Why set low-resource data at 5M?
We acknowledge that dealing with real-world low-resource languages can be more challenging than the low-data setup in our paper. We recognize there is rich work in this space: multilingual pretraining [1,12], multilingual adapters [2,13], and multilingual regularization [11]; we discuss many of them in the related work section. However, the challenge of "low-resource" involves **multiple entangled factors**: the quality of the tokeniser, the amount of data, whether the script/language family is distant from the pretraining language(s), etc.
**Simulating allows us to control these factors and isolate the effect of the factor we are interested in** -- the amount of data in the new language. This factor is essential to our work, as our goal is `plasticity`, i.e. rewiring model predictions with as little new information as possible. Simulating various amounts of data in the new language allows us to compare model plasticity, as shown in Fig 3 of our paper.
We observe that forgetting PLMs outperform standard PLMs when the amount of data is between 10K and 5M tokens. Since the results at 5M tokens already suffice to demonstrate the effectiveness of forgetting, and most low-resource languages contain fewer than several million tokens, we chose to report the results at 5M. The curve in Fig 3 shows the results for other data amounts.
In summary, the choice of simulating a low-data setup suits our research goal well and allows us to contribute a clean piece of knowledge to the line of plasticity research.
### Third, we believe our experiments are comprehensive and convincing.
> the study conducts experiments on 10 simulated low-resource languages
We evaluated our method on three widely used cross-lingual transfer benchmarks, XNLI, XQUAD, and MLQA, encompassing a variety of languages and two different types of tasks, NLI and QA. As shown in Tab 1 in our paper, XNLI alone contains 14 languages. Therefore, we disagree with your comment that we only experiment with 10 languages; we consider it an unfair characterisation of our work.
### Other Concerns and Questions
> How does the update frequency affect the results?
We assume you mean the forgetting frequency $K$. If so, please see the general response.
**Q1**: For a fair comparison, we use the same hyper-parameters following previous work (Kelly 2022) and RoBERTa. We will open-source the code.
**Q2**: The embeddings and the weights in the LM head are tied, as described in the RoBERTa paper [14].
------
[1] Conneau, Alexis, et al. "Unsupervised Cross-lingual Representation Learning at Scale." ACL 2020.
[2] Pfeiffer, Jonas, et al. "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer." EMNLP 2020.
[3] Lyle, Clare, et al. "Understanding Plasticity in Neural Networks." ICML 2023.
[4] Zhou, Hattie, et al. "Fortuitous Forgetting in Connectionist Networks." ICLR 2022.
[5] Chen, Yihong, et al. "ReFactor GNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective." NeurIPS 2022.
[6] Igl, Maximilian, et al. "Transient Non-stationarity and Generalisation in Deep Reinforcement Learning." ICLR 2020.
[7] Alabdulmohsin, Ibrahim, et al. "The Impact of Reinitialization on Generalization in Convolutional Neural Networks." arXiv:2109.00267, 2021.
[8] Taha, Ahmed, et al. "Knowledge Evolution in Neural Networks." CVPR 2021.
[9] D'Oro, Pierluca, et al. "Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier." Deep Reinforcement Learning Workshop, NeurIPS 2022.
[10] Nikishin, Evgenii, et al. "The Primacy Bias in Deep Reinforcement Learning." ICML 2022.
[11] Pfeiffer, Jonas, et al. "UNKs Everywhere: Adapting Multilingual Language Models to New Scripts." EMNLP 2021.
[12] Pfeiffer, Jonas, et al. "Lifting the Curse of Multilinguality by Pre-training Modular Transformers." NAACL 2022.
[13] Ansell, Alan, et al. "Composable Sparse Fine-Tuning for Cross-Lingual Transfer." ACL 2022.
[14] Liu, Yinhan, et al. "RoBERTa: A Robustly Optimized BERT Pretraining Approach." arXiv:1907.11692, 2019. | null | null | null | null | null | null |
Deep Reinforcement Learning with Plasticity Injection | Accept (spotlight) | Summary: This paper studies the loss of plasticity in deep RL and proposes a method for mitigating it at the cost of more overhead memory and computation, but doing so in such a way that does not necessitate adding more trainable parameters or affecting predictions. The latter two are key when using their approach for the diagnosis/analysis of plasticity loss in deep RL agents (as they remove confounders due to changes in exploration or representational capacity).
They also show that their method can be used to increase the size of the agent network during the course of training to better deal with plateaus in training and save computation cost (i.e. starting with a smaller network to save computation and increase the size when plateauing).
This method is also a nice addition to the Reincarnating RL framework, whereby reusing previously trained agents with plasticity injection could help both utilize what was learned in a previous iteration as well as keep the network flexible to absorb new knowledge.
Strengths: - Loss of plasticity in continual deep RL is a very interesting topic and this paper focuses on a methodology to both help analysis/diagnosis of this phenomenon as well as to mitigate it.
- The proposed method has other benefits such as allowing us to increase the size of the neural net during the course of training without retraining or affecting current predictions.
- Paper is well written and does a good job at introducing and motivating the problem, discussing related works, and presenting the ideas.
- Perhaps what I'm most excited about in this paper is the following possibility with their *plasticity injection* approach:
"*An exciting avenue for future research is understanding trade-offs between architectural design decisions: RL agents typically employ networks that were originally proposed for stationary problems, but perhaps dynamically growing networks would suit the non-stationary nature of RL better.*"
Weaknesses: - The methodology of this paper would be beneficial in the short run, but it works around the real underlying problem: the true solution to this problem would likely emerge from the deep-learning toolbox.
- Full-sweep experiments would have helped with illustrative experiments in smaller domains without any exploration confounding altogether (e.g. Q-learning but with a full sweep over the full state-action space.)
- I like the experiment in Figure 1; helps illustrate the phenomenon nicely.
However, the description of what constitutes tasks at each interval could help understand the nature of this illustrative experiment better; e.g., how were the states selected using frozen policies (from every 10M frames), how many such states were selected (as many as the tasks?), how was MC applied to estimate values.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. I agree that analysing plasticity in RL is confounded by exploration. But why not study this phenomenon in manageable finite environments and perform full sweeps over all states or state-action pairs? This could at least serve as a minimalistic way to fully remove the impact of exploration and solely focus on plasticity loss.
2. Regarding this claim (lines 298-300) "*However, the idea of plasticity injection is agnostic to the choice of the architecture. For example, it can be applied for residual blocks in ResNets or decoder blocks in Transformers.*"
I agree. But do we know whether such architectures still suffer from loss of plasticity when used properly in deep RL?
3. Is anything known about whether the root cause of plasticity loss in deep RL would differ from that of deep learning?
4. What measure do you use to automatically identify loss of plasticity or plateaus during training?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations have been discussed sufficiently well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and for the positive evaluation of our submission. We address your questions and concerns as follows:
- We acknowledge that plasticity injection is likely not the final solution to the phenomenon of plasticity loss. Currently, the phenomenon is not clearly understood, and even reliably detecting it has been out of reach. Plasticity injection is a step towards being able to diagnose plasticity loss and thus a step on the way to understanding its causes and arriving at an ultimate solution (indeed potentially based on the deep learning toolbox).
- We agree that experiments in smaller domains without exploration would have been helpful. We have considered the possibilities of using environments with finite state-action spaces where the Q-function would be represented as a table as well as environments where linear function approximation would suffice but figured that the plasticity loss phenomenon is most compelling to study in the case with neural networks and complex environments. Having said that, the work of Dohare et al. [1] presents experiments in controlled domains without exploration reminiscent of the setting we considered in Figure 1.
- Thank you for pointing out that the experiment in Figure 1 could be explained better. We will clarify the setting in a revision. In particular, we construct 20 tasks using policies stored after training the Double DQN agent for 10M, 20M, …, 200M frames. To collect states for each task, we let the stored policy interact with the environment in the evaluation regime, without any policy updates. We stop the interactions after the policy collects 4M states (the default size of the replay buffer for Double DQN). Finally, since we collect rewards from the environment, we could construct Monte-Carlo estimates of the value function in state s_t by taking the discounted sum of rewards r_t + gamma r_{t+1} + gamma^2 r_{t+2} + …. The resulting dataset with states and the MC estimates is then used for one of the 20 regression tasks.
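The Monte-Carlo value estimate described above, G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ..., can be computed in a single backward pass over a reward sequence. The following is our own minimal illustration of that computation, not code from the paper:

```python
def monte_carlo_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ...
    for every timestep t, via one backward pass over the rewards."""
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g  # G_t = r_t + gamma * G_{t+1}
        returns[t] = g
    return returns

# With rewards [1, 0, 2] and gamma = 0.5: G_2 = 2, G_1 = 0 + 0.5*2 = 1, G_0 = 1 + 0.5*1 = 1.5
```

Pairing each state s_t with its return G_t yields the regression dataset used for one of the 20 tasks.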
- It is indeed compelling to consider tabular environments because of the clarity of the analysis they offer. One of the reasons why we decided to use complex environments like Atari directly is because it is not guaranteed that empirical observations about plasticity loss we would gain in tabular domains would be the same as in complex environments. However, we will think more about the possibilities of designing convincing experiments in environments with finite state-action spaces that could offer general insight on the phenomenon.
- Whether ResNets and Transformers suffer from plasticity loss is a great question. The research is still preliminary in this space but the authors are aware of two pieces of evidence suggesting that the answer might be “yes”: first, Lyle et al. [2] demonstrated loss of plasticity for both ResNets and Transformers on small-scale constructed MDPs; second, Adaptive Agent Team [3] argued that difficulties of training their architecture (that contained both Transformer and Residual blocks) were due to plasticity loss due to overfitting to initial experiences.
- Whether the causes of plasticity loss in deep RL and deep learning are different is another great question. The authors believe that there’s an overlap of causes as well as distinctive ones. As an example of a cause distinctive to deep RL, temporal difference learning poses a very peculiar optimization problem that might be one of the culprits of plasticity loss: inherently it learns a value function from its own bootstrapped predictions. Such a setting is not common for supervised problems with fixed prediction targets that deep learning typically deals with.
- We are not completely sure whether we have understood the last question correctly so we would like to kindly point out that identifying the loss of plasticity is one of the intended uses of the proposed plasticity injection approach. However, if we misunderstood the question, we are happy to engage in further conversation during the author-reviewer discussion period.
Again, we thank you for the review and for all of the comments.
[1] Dohare, Shibhansh, J. Fernando Hernandez-Garcia, Parash Rahman, Richard S. Sutton, and A. Rupam Mahmood. "Maintaining Plasticity in Deep Continual Learning." arXiv preprint arXiv:2306.13812 (2023).
[2] Lyle, Clare, Zeyu Zheng, Evgenii Nikishin, Bernardo Avila Pires, Razvan Pascanu, and Will Dabney. "Understanding plasticity in neural networks." In International conference on machine learning. PMLR, 2023.
[3] Adaptive Agent Team, Jakob Bauer, Kate Baumli, Satinder Baveja, Feryal Behbahani, Avishkar Bhoopchand, Nathalie Bradley-Schmieg et al. "Human-timescale adaptation in an open-ended task space." In International conference on machine learning. PMLR, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Your response to my review as well as reading through other reviews has clarified my questions. I would like for this paper to be accepted and as such retain my evaluation.
---
Reply to Comment 1.1.1:
Comment: Thank you again for taking the time to review our submission and reading the rebuttal. | Summary: The author propose a solution to the problem of loss of plasticity (decrease in learning effectiveness over time). Their solution is to freeze the existing network and add two new heads, one free to learn, the other frozen to the negative of the free-learning new head. The result is that the intervention has zero effect on outputs at the time of plasticity injection, while allowing a freshly reset head to learn from scratch.
They suggest that the method can be used as a diagnostic tool, avoiding possible confounds from other methods that rely on resets and create large changes in outputs.
They also suggest that the method improves performance better than existing plasticity-restoring methods, while being less costly than training a larger network from time 0.
== Update after rebuttal ==
Considering the authors' reply, I have slightly updated my evaluation to lean more towards acceptance.
Strengths: The method is novel. The experiments seem thorough (though some clarifications are needed, see below).
Weaknesses: The main difficulty is that the method actually imposes a considerable computational cost and further complexity, while apparently providing a small benefit vs. the much simpler alternative of training a larger network from scratch (<10% total wallclock time, IIUC from Figure 6)
It also seems that the method introduces an additional parameter (when to inject plasticity) against the simpler method of training a larger network from scratch.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Figure 1: The successive “tasks” are not random. There is an order induced by the fact that they represent results of a slowly learning and improving policy: the policy at 20K vs 10K or 4K is not just different, it is better trained. This might be a confound if the training impacts (say) the complexity of the actual value function. What happens if you do the same figure but randomizing or inverting the order of the successive policies whose value is being predicted? (i.e. at iteration K, the task is to predict the values for K’-th frozen policy, with K’ != K)
- l. 233: affect -> affected
- Figure 4: “We take the maximum score among the agents with plasticity injection after 25M, 50M, and 100M steps" - come on! Even adding random noise to the values would provide an apparent "net improvement" under such a selection method! At the very least, show us the results with injection at 50M for all.
- In Fig 6a, at what time do we perform the interventions? 50M?
- IIUC the difference between "Width Scale" in Fig 6a and "Larger Net" in Fig 6b is that the former only enlarges the networks at 50M (?), while the latter is larger from time 0. Is this correct? If so, it should be stated in the main text.
- There is an alternative to the tripling of head parameters imposed by the method. Just add one new freshly initialized head, with a scalar multiplier on its output initially set to 0. Presumably this should reduce the added cost while providing equivalent results. What happens if you do that?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors do mention some limitations of the work, especially the cost.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We address your concerns below (AR = author response).
*“The method actually imposes a considerable computational cost and further complexity, while apparently providing a small benefit vs. the much simpler alternative of training a larger network”*
**AR**: We appreciate your concern about the benefits of plasticity injection (PI) compared to the use of a larger network. Whether the additional complexity is justified depends on the scale of the setting. As an extreme example, Agarwal et al. [1] calculated that reproducing the results of the AlphaStar agent [2] would cost $3M+. We believe that saving 10% of computations in such settings would justify the additional complexity from PI. As the models get larger, it is likely that this type of scenario will become increasingly common.
Another benefit of plasticity injection is that it allows adding extra plasticity anytime whilst having a large network requires committing to a fixed network size in advance. In other words, if an RL practitioner estimated incorrectly the needed network size before launching an experiment, PI could be used to extend the network on the fly.
Lastly, we would like to emphasize another use case of plasticity injection we argue for: as a tool for diagnosing plasticity loss, which is one of the main contributions of the paper. In our experience, Deep RL systems are notoriously challenging to debug, and being able to isolate plasticity concerns from other confounding factors present in RL, such as exploration, would be valuable for understanding the source of performance bottlenecks.
*“The method introduces an additional parameter (when to inject plasticity) against the simpler method of training a larger network”*
**AR**: Thank you for raising the point about the injection timestep hyperparameter. Firstly, we have verified that plasticity injection is robust to the choice of the timestep: Figure 9 demonstrates that the IQM scores after applying PI at 25M, 50M, and 100M are roughly the same. Secondly, we have experimented with an adaptive criterion for injection that gets rid of the necessity to specify the injection timestep (see Appendix B, “Adaptive Criterion for Injection”): with such a criterion, the agent achieves the same IQM as with using PI at 50M frames, however, we decided to use the version of PI with a specified timestep for simplicity and clarity.
Please also find answers to your questions below:
- **On the task order.** Thank you for this question, it’s indeed possible that the prediction problem difficulty is growing over time. To address this concern, we proceeded with the reviewer’s recommendation and tried to randomize the order of the policies to evaluate; the pattern was the same: if we preserve the parameters after every task change, the final value prediction error grows for each subsequent task. Crucially, please also note that resetting the network parameters after each task switch serves as a control: since the randomly initialized network attains the same final loss on each task, we can conclude that all prediction problems in the sequence are of similar difficulty.
- **Typo.** Thanks for catching the typo, we’ve fixed it.
- **On taking the max score after PI at 25M, 50M, and 100M frames.** We emphasize that *we do not use Figure 4 to give an unbiased estimate of the performance of PI*. We agree that such a selection strategy incurs a risk of overestimating the effectiveness of the method. Figure 4 is intended to *illustrate the potential performance improvements that could be gained from eliminating plasticity loss*. We will give a more detailed clarification in a revision. In Section 5.3, where we argue that PI outperforms alternative methods, we use PI applied at 50M without taking the maximum score. Figure 7 in the Appendix gives per-environment results for PI applied at 50M frames.
- **Intervention timestep in Figure 6, left.** Injection and Width Scale are applied after 50M frames; for Resets and SnP we have performed a grid search and reported the best results after applying Resets / SnP either once at 50M frames or thrice at 50M, 100M, and 150M. Appendix C gives all details about the baselines.
- **Difference between Width Scale and Larger Net.** Your understanding of the difference is correct: the former enlarges the network at 50M steps, while the latter is larger from the start. We’ll clarify that in the revision.
- **On having a single extra head with an initial scalar multiplier set to 0.** That’s a good point. We have tried to use a scalar multiplier on the outputs of the second network. The challenge is what to do with the multiplier after setting it initially to zero. Both learning this scalar and manually annealing it resulted in unstable training so we discarded this option. Now, your direction of thinking is still valid: it might be possible to minimize the effects of the second network by, for example, modifying its initialization scheme. From the analysis viewpoint, we strived for clarity: changing the initialization is undesirable because it would introduce a confounder for our analysis (since initialization affects the signal propagation and thus network plasticity); from minimizing the computations viewpoint, this avenue might be promising though.
We thank you again for the review and all the questions and hope that the rebuttal addresses the outlined concerns. We will happily engage in a subsequent conversation during the author-reviewer discussion stage.
[1] Agarwal, Rishabh, Max Schwarzer, Pablo Samuel Castro, Aaron C. Courville, and Marc Bellemare. "Reincarnating reinforcement learning: Reusing prior computation to accelerate progress." Advances in Neural Information Processing Systems 35 (2022): 28955-28971.
[2] Vinyals, Oriol, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu et al. "Grandmaster level in StarCraft II using multi-agent reinforcement learning." Nature 575, no. 7782 (2019): 350-354.
---
Rebuttal Comment 1.1:
Comment: Thank you for your considerate response. In light of the clarifications and other reviewer's comments, I have modified my assessment to lean more towards acceptance.
Re: single additional head with multiplier: the instability might be addressed by putting a squashing nonlinearity (e.g. tanh) on the multiplier, though perhaps you've already tried this obvious fix.
---
Reply to Comment 1.1.1:
Comment: We are grateful for the decision to increase the score, thank you.
We have indeed tried to use the sigmoid non-linearity for the multiplier to ensure that it belongs to the (0, 1) range. However, in our experimentation, even this version was unstable. It is a promising direction though and we will keep an open mind for possibilities for ideas in this space. | Summary: The work attempts to study the role of plasticity loss in deep reinforcement learning. The paper proposes a new tool to diagnose plasticity loss. The new tool is designed to minimize the effect of other confounders like exploration. The results on Atari show that plasticity loss affects the performance of double DQN, and plasticity injection often breaks performance plateaus.
Strengths: The paper's main contribution is a new diagnostic tool for plasticity loss. The complexity of deep RL makes it challenging to isolate phenomena like loss of plasticity and study them carefully. The proposed tool, plasticity injection, does an excellent job of isolating the effect of plasticity loss while minimizing the impact of other confounders.
The empirical results show that double DQN suffers from plasticity loss in Atari games. And plasticity injection improves performance in almost all games.
The improved performance is particularly exciting because agents like Double DQN are tuned to achieve high performance in 200M frames. My guess is that if we let these agents run for longer, their performance will plateau in most games. So, the effect of plasticity loss will be more apparent if these agents learn for longer, say 1B frames. And in such long-lived agents, loss of plasticity will become a more dominant phenomenon. So it's interesting to see plasticity loss even with parameters tuned for good performance in 200M frames.
Weaknesses: The proposed diagnostic tool is well thought out and works as intended. However, other contributions in the paper have some issues.
The main problem lies when comparing other methods like SnP and Width Scale with plasticity injection. It is not clear if other methods were tuned properly. The appendix said that some parameters of SnP were tuned, but the paper does not mention if other parameters, like learning rate, replay ratio, or buffer size, were tuned for any of these methods. Unfortunately, that is the downside of using only large environments like Atari; we can not perform rigorous empirical investigations, limiting the conclusion we can draw from some of the experiments, like Figure 6.
A small issue with the paper is that it only studies value-based methods and not policy gradient methods, so its title is a bit misleading. A better title would be _Value-based Deep RL with Plasticity Injection_.
However, these issues do not affect the paper's primary contribution, so I recommend accepting this paper.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The results with L2 regularization in the appendix are surprising. Prior results on plasticity loss have shown that L2 regularization helps mitigate it. Figure 1 in the paper by Sokar et al. (2023) shows that L2 regularization improves the performance of DQN. Why do you think there is a discrepancy between the effect of L2 regularization in your results and previous results?
I guess that because the parameters of double DQN are not tuned for L2, we might need to tune all the parameters of double DQN to benefit from L2.
Sokar, G., Agarwal, R., Castro, P. S., & Evci, U. (2023). The dormant neuron phenomenon in deep reinforcement learning. arXiv preprint arXiv:2302.12902.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The main limitation is that the other methods are not properly tuned, so we cannot draw strong conclusions from the experiments in Section 5.3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to write the review. We are grateful for the positive evaluation of our submission. We provide responses to your questions and concerns below.
We acknowledge that due to the scale of the experiments we did some hyperparameter tuning for methods like SnP and Width Scale but not a fully exhaustive one (that would include the learning rate, the replay ratio, and the buffer size). Since we have used the default hyperparameters of Double DQN and did not tune them when using plasticity injection (i.e. leaving the LR, the RR, and the buffer size untouched), we hope that during the comparison with other methods in Section 5.3 we did not give a disadvantage to the alternative methods.
As for the title of the paper, even though we indeed used value-based RL as a testbed, the plasticity injection is agnostic to the choice of the family of algorithms and could be applied to function approximators both for values and policies.
Lastly, it is indeed slightly puzzling to see the discrepancy between our L2 results and the results reported by Sokar et al. After checking the details of their paper (Appendix A on Page 14), it appears that Sokar et al. also did not tune the hyperparameters of their agent to be compatible with L2; they performed only a grid search over the regularization coefficient. However, there are several details about the setting that could explain the difference: first, Sokar et al. used a subset of 17 Atari games, while our setting considers all 57 Atari games; this subsampling might have affected the conclusions. Likewise, their experiments with L2 on Atari use 10M frames, while we trained agents for 200M frames. Finally, their base algorithm was DQN while ours was Double DQN; perhaps L2 was mitigating the overestimation bias by implicitly controlling the magnitude of the outputs through the magnitude of the weights. Overall, the effect of L2 in deep RL is quite mysterious, and it is surprising to realize that the majority of standard algorithms like DQN, PPO, and SAC do not use it. More research is needed to clearly understand the interaction of L2 regularization with RL.
Thank you again for the thoughtful review and comments.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and clarifications. I maintain my score and recommendation of acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you again for the thoughtful review and for reading our rebuttal response. | Summary: This paper provides a deep dive into the issue of plasticity loss in deep reinforcement learning and introduces a novel approach, "Plasticity Injection". The authors employ this technique to not only diagnose plasticity loss but also effectively dissect confounding factors, such as exploration mechanisms. Through extensive experiments, the authors verified that Plasticity Injection significantly increases the agent's performance with minimal computational overhead. The comprehensive investigation of the role of resets in the plasticity viewpoint further enriches the paper's contributions to the field.
Strengths: 1. Building upon the reset mechanism, this paper offers a successful strategy for tackling plasticity loss in Deep RL, while dissecting the role of exploration.
2. Plasticity Injection module is simple and straightforward, highlighting its wide potential applicability across various algorithms and architectures.
3. The authors conduct an in-depth analysis of the elements contributing to performance improvement.
4. I liked the qualitative analysis of why the score on Assault gets stuck.
Weaknesses: 1. While the Plasticity Injection module was successfully integrated into the Atari environment, it would benefit from broader testing across various environments, considering that the reset mechanism [1] has consistently performed well in the DeepMind Control Suite environment.
2. Related to Weakness 1, recent work from Lyle et al. [2] has found that resetting the last layer does not improve the model's plasticity in their synthetic experiments. Can we really confirm that the reset mechanism improves performance on the Atari environments because it improves the model's plasticity?
[1] The primacy bias in deep reinforcement learning., Evgenii Nikishin et al.
[2] Understanding plasticity in neural networks., Clare Lyle et al.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Can the authors provide the results of the experiments conducted for the Adaptive Criterion for Injection and L2 Regularization? It seems that these results are missing in the appendix section.
2. A recent investigation from Lee et al [1] has claimed that the reset mechanism is closely tied to the non-stationarity of labels in reinforcement learning while having minimal impact in terms of the inputs' non-stationarity. I would like to see whether the Plasticity Injection (Reset without exploration) behaves similarly to the Reset.
[1] Enhancing Generalization and Plasticity for Sample Efficient Reinforcement Learning., Hojoon Lee et al.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our submission and for the thoughtful comments. We provide responses to your questions and concerns below.
Thank you for raising the point about testing plasticity injection outside of Atari. We have conducted an experiment in a simple non-Atari setting based on CIFAR10. In detail, we split the dataset into 10 parts uniformly at random and fit the network sequentially on the first part, then on the second, and so on. The default network gradually loses plasticity as it faces subsequent parts of the data, while the network with plasticity injection is able to learn on all parts. The results are attached in the supplementary PDF that can be found with the joint response. Because of time and resource constraints, we provide only this simple experiment, but we hope that it can serve as a piece of evidence for the applicability of plasticity injection outside of Atari.
As for your concern about the findings of Lyle et al., we would like to point out that the argument in their paper is more nuanced: indeed, the solution that they argued for in the end was not based on resets because one of their motivations was to come up with a non-drastic method for addressing plasticity loss, but the experiment in Figure 6 of their paper actually shows that resets mitigate plasticity loss.
We articulate the results of the experiments with adaptive criterion and L2 regularization in the appendix in textual form (L542, L550). We did not include a separate figure for these results because the IQM for the adaptive criterion coincides with the IQM after applying plasticity injection at 50M frames; likewise, the IQM of the agent with the best L2 coefficient coincides with the IQM of the base Double DQN agent.
Lastly, thank you for putting the paper of Lee et al. from 2 months ago on our radar! It is indeed an interesting question whether PI helps with the input or target non-stationarity and there’s potential evidence for both: in the CIFAR10 experiment that we introduced earlier and that resembles one of the experiments in the paper you have suggested, the target dependency is fixed and it is a set of datapoints that is changing. At the same time, since the extra network is learning a residual to the prediction of the frozen network, we could also say that PI could be helpful with the target non-stationarity. Carefully investigating this question would be an exciting avenue for future research.
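The residual formulation mentioned in the reply above (the extra network learning a residual to the frozen network's prediction) can be sketched minimally. The following is a hypothetical numpy illustration with linear heads, not the paper's code; the names `W_frozen`, `W_new`, and `W_new0` are illustrative, and the frozen-copy subtraction is assumed so that the prediction is unchanged at the moment of injection:

```python
import numpy as np

# Hypothetical sketch of the plasticity-injection idea (not the paper's
# code): the trained head is frozen, and a fresh head is added twice --
# once trainable, once as a frozen copy -- so the two frozen terms
# cancel exactly at injection time:
#   y(x) = h_frozen(x) + h'_theta(x) - h'_theta0(x)
rng = np.random.default_rng(0)
d_in, d_out = 8, 4

W_frozen = rng.normal(size=(d_out, d_in))  # trained head, now frozen
W_new = rng.normal(size=(d_out, d_in))     # freshly initialized, trainable
W_new0 = W_new.copy()                      # frozen copy of the fresh init

def predict(x, W_trainable):
    # Only W_trainable would receive gradients during training.
    return W_frozen @ x + W_trainable @ x - W_new0 @ x

x = rng.normal(size=d_in)
# At injection time the prediction equals the original frozen head's output.
assert np.allclose(predict(x, W_new), W_frozen @ x)
```

Once `W_new` drifts away from its frozen copy during training, the residual term becomes nonzero, which is how the freshly initialized parameters can adapt to new data while the original prediction is preserved as a baseline.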
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' thorough response to the initial review. Their clarifications have relieved most of my concerns. **In response, I'm adjusting my score for this paper to 7** (I think it's not possible to edit the initial review so will just leave it as a comment).
I do, however, have further questions related to the CIFAR-10 experiment, especially concerning the reset mechanism. I'd like to stress that these are just questions of curiosity:
> Training vs. Test Accuracy with Reset Mechanism on CIFAR-10:
The insights provided about the reset mechanism, especially its impact on gradient flow and training accuracy, are intriguing. However, I'm curious about its performance in test scenarios. Specifically, how does the training accuracy correlate with test accuracy when this plasticity injection mechanism is employed? Is there any difference in test accuracy between plasticity injection vs reset?
> Comparison with Other Plasticity-Preserving Techniques on CIFAR-10:
The paper's exploration of plasticity injection is noteworthy. I'm curious, however, about how it compares to other methods like spectral normalization, which is also known for preserving neural plasticity. A brief discussion or comparison of plasticity injection against such techniques, particularly in the context of datasets like CIFAR-10, would offer a more rounded perspective on the efficacy of the proposed method.
---
Reply to Comment 1.1.1:
Comment: Thank you for increasing the score! We deeply appreciate it.
As for the training vs test accuracy on CIFAR10, we have two observations to offer:
1. If the plasticity injection mechanism is employed, the relationship between train and test is the same as for the network without resets and before plasticity injection;
2. The final test performance of the network with plasticity injection vs with resets is the same, however, a network with PI does not suffer from a test accuracy drop after every task switch and hence has higher test accuracy during the initial iterations.
Thank you also for the suggestion about contrasting additionally to methods like spectral normalization.
1. Following your recommendation, we have tried using SN in the CIFAR10 setting and observed that it indeed allows maintaining plasticity longer than the network without SN; however, as the number of data shifts increases, even the network with SN eventually demonstrates loss of plasticity.
Plasticity injection, on the other hand, can be applied an arbitrary number of times, making it possible to maintain plasticity across any number of data shifts (although at the cost of increasing the total number of parameters).
2. Mechanistically, SN approximately ensures that the largest singular value of a weight matrix it is applied to is 1. This alleviates causes of plasticity loss related to weight magnitude growth and ill-conditioning but might not address other potential causes, such as ReLU neurons becoming dead. PI, in this sense, is agnostic to the causes of plasticity loss because it adds freshly initialized parameters. Figure 5(d) in the submission supports this: even for the network equipped with SN, applying PI yields positive improvements, suggesting that PI addresses some causes of plasticity loss that are not addressed by SN. | Rebuttal 1:
Rebuttal: We thank all reviewers for their efforts to provide the detailed feedback. We particularly appreciate that reviewers find the paper novel (Reviewers 227D, Foz7), well-written (x17L), comprehensive (227D), thorough (Foz7) as well as highlight that the method is simple (227D), well thought out (P5kx), and with an in-depth analysis of its elements (227D).
We provide responses to reviewers’ questions and concerns in individual replies and attach a supplementary PDF with an experiment aiming to address a concern raised by reviewer 227D.
Pdf: /pdf/fbf02d1df62d4a63b9faef53b100eaf6b6da943c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Simple and Controllable Music Generation | Accept (poster) | Summary: This paper introduces MusicGen, an innovative single-stage transformer language model for conditional music generation, using either text or melody as a conditioning factor. The authors focus on exploring different approaches of codebook interleaving patterns and propose an efficient approach of Delay Pattern. Experiments were performed against state-of-the-art conditional music generation models, and multiple ablation studies support the efficacy of the proposed approach. The authors have further contributed to the community by releasing demo samples and inference code to the public.
Strengths: - The paper is presented clearly, which greatly aids comprehension of the presented methods and results.
- The proposed *Delay Pattern* is a novel approach that shows strengths in computational time. It’s surprising to see the decoder interpret the residual codebooks of the Delay Pattern, even when the delayed codebooks are summed and used as input. A more in-depth theoretical explanation of this would be valuable.
- The empirical results are impressive, where the evaluation against state-of-the-art text to music generation systems provides solid evidence for the model's performance.
- The authors' decision to open-source the code and model weights could significantly impact both academia and industry. I’m already seeing some online applications utilizing MusicGen's API.
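The Delay Pattern praised above can be sketched as a simple re-indexing of the $K$ parallel codebook streams, so that one transformer step predicts one token per stream and the total number of autoregressive steps drops from roughly $K \cdot T$ (flattening) to $T + K - 1$. This is a hypothetical illustration, not the released implementation; `PAD` stands in for the special token used where a shifted stream has not started (or has ended):

```python
PAD = -1  # placeholder for the special "empty" token

def delay_pattern(codes):
    """Shift the k-th of K parallel codebook streams right by k steps.

    codes: list of K lists, each of length T (tokens per stream).
    Returns K lists of length T + K - 1; at decoding step s the model
    predicts out[k][s] for every k in parallel.
    """
    K, T = len(codes), len(codes[0])
    out = [[PAD] * (T + K - 1) for _ in range(K)]
    for k in range(K):
        for t in range(T):
            out[k][t + k] = codes[k][t]
    return out

# Two streams of three tokens need 3 + 2 - 1 = 4 steps:
# delay_pattern([[1, 2, 3], [4, 5, 6]])
# -> [[1, 2, 3, -1], [-1, 4, 5, 6]]
```

The shift means codebook $k$ at time $t$ is predicted conditioned on codebook $k-1$ at the same time step, which approximates the full joint dependency of flattening at a fraction of the decoding cost.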
Weaknesses: - The reproduced *Valle-E pattern* among the *codebook interleaving patterns* seems inconsistent with the original manuscript [A].
In the Section **Introduction**, the authors correctly mention the codebook generation steps of *Valle-E* as “(i) modeling the first stream of tokens only; (ii) then, apply a post-network to jointly model the rest of the streams in a non-autoregressive manner.”
Yet, the authors reproduced the *Valle-E* pattern with step (ii) being decoded in an autoregressive manner. The correct pattern of *Valle-E* would be to use two different decoders and perform step (ii) in a non-autoregressive manner (given the autoregressively generated tokens from step (i)), resulting in $n+3$ sequence steps.
- Thus, this discrepancy should be addressed either by 1. correcting the *Valle-E* pattern methodology or 2. simply renaming the “Valle-E Pattern” in the manuscript. #1 option would be to rectify *Figure 1, Section 2.2, Equation (6), Table 3 (Nb. steps)*, and updating the results of *Table 3*.
- Some aspects of the paper lack details and justifications of method choice.
1. According to the ablation study results of *text augmentation* Table A.2, the best performing method is applying only the *condition merging*. Since the authors are using *Condition merging + Text-norm. + Word dropout* as the final text augmentation strategy, they should provide a justification for this.
2. The final choice of text encoder is unclear. Although the authors have stated that they performed an ablation study using all three different text encoders (T5, Flan-T5, and CLAP), the reviewer couldn’t find an indication which model was selected as the final text encoder. Furthermore, the results from the ablation study Table A.1 does not align with any of the scores from Table 1 & 2. If the authors used different configurations for Table A.1, they should mention it in the manuscript.
- typo at line 196: “hope size” → “hop size”
[A] Wang, Chengyi, et al. "Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers." *arXiv preprint arXiv:2301.02111* (2023).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see the concerns raised at **Weakness** section. + theoretical explanation of the decoder interpreting *Delay Pattern* mentioned at **Strengths** section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper comprehensively outlines its limitations and discussions in the manuscript, which align with the reviewer's.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **_Regarding the VALL-E codebook pattern_:** Indeed, we incorrectly describe the “VALL-E” codebook patterns as corresponding to VALL-E. We renamed it to “flat 1st codebook then parallel”.
**_Regarding text augmentations_:** During model development, we experimented with condition merging, word dropout, and text normalization. For the final models, we use condition merging + word dropout as it gives the best overall performance. We agree the description in the text is confusing and we will clarify that for the final manuscript. To better support our design choice, we provide here a human study (OVL. and REL.) in the table below.
### Text augmentations Human study
| Configuration | FAD | KL | CLAP sc. | OVL. | REL. |
|-----------------------------------------------|----------|------|----------|----------------|----------------|
| No augmentation | 3.68 | 1.28 | **0.31** | **83.4** ± 1.4 | 81.2 ± 1.3 |
| Condition merging | **3.28** | 1.26 | **0.31** | 82.6 ± 1.4 | 84.5 ± 1.2 |
| Condition merging + Text-norm. | 3.78 | 1.30 | 0.29 | 80.6 ± 2.1 | 82.4 ± 1.1 |
| Condition merging + word dropout | 3.31 | 1.31 | 0.30 | 82.5 ± 1.6 | **85.3** ± 1.0 |
| Condition merging + Text-norm. + Word dropout | 3.41 | 1.39 | 0.30 | 81.2 ± 1.9 | 84.3 ± 1.6 |
Results suggest condition merging + word dropout provides comparable generation quality but better text relevance scores.
**_Regarding the used text encoder_:** We use the T5 text encoder as noted in line 192. Considering all metrics, the T5 model gets the best overall performance (Table A.1). The configuration used in Table A.1 is indeed different: since this is an ablation study, we use a 300M-parameter model (see lines 170-171). We now indicate clearly when an ablation is done with a 300M-parameter model, and in the caption of Table A.1 we further repeat that T5 is the encoder used for the main models.
**_Regarding typos_:** Thanks! We will fix the typo. | Summary: MusicGen is an auto-regressive architecture for music audio generation conditioned on textual descriptions and an optional melody. The key proposal is a generic formulation of audio codebook interleaving strategy, which enables parallel code streams to be processed with a simple single-stage Transformer decoder while balancing efficiency with inter-stream dependency. Through extensive evaluation, MusicGen demonstrates a superior performance in acoustic coherency and semantic faithfulness compared to previous models. Sufficient ablation studies also validate its design choices for text pre-processing, melody conditioning, code interleaving pattern, and model scale.
Strengths: * **Unsupervised melody conditioning**: The paper uses quantized chromagrams to capture the most salient melodic features to conduct melody conditioning. To suppress interference with low-frequency instruments, it further leverages a pretrained source separation model to detach those bands before extracting chromagram. Such an unsupervised scheme is efficient, sound, and working reasonably well.
* **Ablation study**: Extensive empirical studies are conducted to validate the key design choices, which are beneficial for the reference of follow-up research.
* **Compelling performance**: Demos demonstrate a compelling acoustic and semantic quality. It is good to know that code and models will be open, for which a good impact of this work can be expected.
Weaknesses: * **Long-term music structure**: The paper demonstrates several long generation examples created using windowed sampling, showcasing the ability of MusicGen to produce remarkably authentic grooves and looping patterns. However, one noticeable weakness lies in the relative absence of a distinct phrasal or sectional structure in the long run, which may lead to a lack of sense of musical development. To tackle such problem may require more musically insightful inductive bias in the framework design.
* **Fine-grained control**: The text-conditional architecture of MusicGen focuses on global semantic control, which essentially guarantees a globally consistent “picture” throughout the generated piece. On the other hand, musicians sometimes intentionally want to create variations and inconsistency to make the piece more impressive and contagious. Achieving such purpose would require a more fine-grained controlling scheme, which is not currently supported by MusicGen.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * There seems a notation clash on the variable $N$. On line 74, $N$ is defined as codebook size, but is later used to represent code sequence length in line 98, line 145, and the caption of Figure 1.
* Line 95: Should the autoregressive steps of MusicLM be $df_r\cdot K$, rather than $df_s\cdot K$?
* Line 177: What is the purpose of applying EMA to the Transformer model? Shouldn’t it be part of the audio tokenization model for updating the codebook?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: In my view, the main potential limitation of this work lies in long-term music structure and fine-grained control, which has been detailed in the Weaknesses section. The latter has also been discussed in the paper. However, I understand that this paper focuses on text semantic control (with already compelling results) and those areas are currently out of the scope.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **_Regarding long-range musical structure_:** We agree that evaluating musical structure in the generated music is interesting and important for music generation. However, it is far from trivial as the generated output is audio samples and not interpretable discrete representations (like midi). Developing such metrics is an important future research direction, we hope the community (including us) will pursue it. Nevertheless, we believe MusicGen is an important step in the right direction of developing high-quality music generation models.
**_Regarding fine-grained control_:** We agree fine-grained control can benefit the music industry and specifically music creators. However, we believe global control is also important and can benefit creators who are not expert musicians. Nevertheless, we believe MusicGen is an important step in the right direction toward developing controllable music generation.
**_Regarding notations_:** We fixed this notation: we renamed the $N$ that corresponds to the time steps to $T$ in line 98 and Figure 1.
**_Regarding line 95, MusicLM_**: Thanks for the correction! We fixed it for the final manuscript.
**_Regarding EMA_:** During preliminary experiments, we noticed that using an exponential moving average during model training smoothes out the validation loss. Such a technique was also used in prior work with transformers; see [1].
[1] Touvron, Hugo, et al. "Training data-efficient image transformers & distillation through attention." International conference on machine learning. PMLR, 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response to my review.
At a system level, the technical contribution of the paper is undoubtedly solid. While the work may exhibit a certain lack of scientific novelty and deeper insights into the music generation task, I am convinced that "MusicGen is an important step in the right direction". The additional experimental results on stereo generation and memorization analysis would undoubtedly deepen the potential impact of this work. I appreciate the efforts of the authors and would maintain my recommendation of acceptance. | Summary: The paper proposes a single-stage music generation model that can input text or melody. The proposed approach uses tokens from pre-trained neural audio codec tokens with multiple residual vector quantizers and investigates efficient language modeling to reduce the length of autoregressive steps. Experiments on large-scale datasets show better generative results than some baselines.
Strengths: 1. It is challenging to achieve strong performance and efficiency simultaneously, but this paper shows these could be achieved elegantly.
2. The samples in supplementary have high audio quality and text-music coherence. I could perceive the difference in comparison to other approaches.
3. The paper is generally written well. The motivation and demonstrations are clear.
4. I am glad that the code and models will be available for future comparison and reproduction.
Weaknesses: 1. While the generative samples are impressive, I could see the technical novelty in this paper is limited. The audio tokens or EnCodec models are widely used in various applications. The 'parallel' or 'delay' codebook interleaving patterns are proposed in previous papers. The text2music or melody2music tasks have been established. I was having a difficult time capturing the core contribution of this paper. From my perspective, the main contribution of this paper would be to perform extensive experiments, analyze the results and make comparisons to demonstrate the critical components for efficient modeling. From this aspect, additional experiments (discussed in the questions) might be needed to convince me fully.
2. Nowadays, we know data is a core part of reaching successful training, and obviously, the collected licensed music data probably won't be released. It is not a unique situation, but reproduction and future comparison will become an issue, and I wish authors could discuss this aspect. Besides, a genre distribution of the collected data might be helpful.
3. Evaluation of generated music is another tricky thing. I understand that FAD, KL-div, CLAP consistency, and subjective evaluations are almost the standard evaluation metrics, but none consider whether the musical structure makes sense. I am wondering when the evaluation gap between symbolic (midi) based and audio-based methods could be consistent.
4. In the MusicLM paper, there is a specific section about data memorization, which could be problematic, especially for licensed data. I want authors to discuss how such an issue could be avoided/improved as the technical components become mature.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. How does the number of codebooks affect the modeling? The original Encodec has 8 codebooks, and SoundStream has 12 codebooks. The 'flatten' idea becomes much more computationally expensive, but how could you ensure it would degrade the performance when the number of codebooks increases? Similarly, the codebook size also could change the difficulty of the optimization.
2. the authors tried various text encoders in the main paper, but the generated samples are not described. In particular, I am interested in comparing CLAP text embedding and T5.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: As I mentioned in the weakness, data memorization could be an issue and I am looking forward to the author's response.
Flag For Ethics Review: ['Ethics review needed: Compliance (e.g., GDPR, copyright, license, terms of use)']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **_Regarding the contribution of the proposed method_:** The novelty and contribution of our work are designing a simple and efficient auto-regressive model to perform text-to-music generation. Unlike prior work, which consists of a cascade of models using either super-resolution or semantic tokens, we present a single-stage language model through an efficient codebook interleaving strategy. With the codebook-interleaving patterns, we provide a framework that generalizes prior work in the field. Additionally, we present a single model to perform both text and melody-conditioned generation and demonstrate that the generated audio is coherent with the provided melody and faithful to the text-conditioning information.
Following the reviewer's concern, we additionally include an extension of MusicGen to stereophonic music generation, making it the first model to generate stereophonic music conditioned on textual descriptions. OVL. and REL. scores can be seen in the table above (in response to reviewer A118) (and inside the attached rebuttal PDF file, Table 1), samples were shared with the AC through an external link as per the NeurIPS guidelines. Due to all of the above, we believe our work is novel and the contributions are important and would be valuable to the community.
**_Regarding the used dataset_:** our dataset consists of 20K hours of licensed music, comprising 10k high-quality music data, ShutterStock, and Pond5 music data collections, which are available for licensing (lines 202-206). As per the reviewer's request, we provide a genre distribution of the training set in the attached rebuttal PDF file (Figure 3). Regarding comparison to MusicGen, we will open-source both training code and pre-trained models, so future research could: (1) directly compare the MusicGen models, and (ii) build new and improved music generation models on top of MusicGen.
**_Regarding evaluation function_:** We agree that evaluating musical structure in the generated music is interesting and important for music generation. However, this is far from trivial as the generated output is audio samples and not interpretable discrete representations (like midi). Developing such metrics is an important future research direction, we hope the community (including us) will pursue it. Nevertheless, we believe MusicGen is an important step in the right direction of developing high-quality and controllable music generation models.
**_Regarding memorization metrics_:** As per the reviewer’s request, we follow Agostinelli et al. [2023] and analyze the training-data memorization of MusicGen. We consider the first stream of codebooks from MusicGen, as it is the coarsest-grained stream and the most important for the generated audio. We randomly select $N=20,000$ examples from our training set and, for each one, feed the model a prompt of EnCodec codebooks corresponding to the original audio together with the conditioning information. We generate a continuation of 250 audio tokens (corresponding to 5 seconds) using greedy decoding. We report exact matches as the fraction of examples for which the generated audio tokens and source audio tokens are identical over the whole sequence. In addition, we report partial matches as the fraction of examples for which the generated and source sequences have at least $80\%$ of the audio tokens matching. The figure can be seen in the attached rebuttal PDF file (Figure 2). As can be seen, the exact and partial matches are almost zero under all evaluated settings.
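The exact/partial match computation described above can be sketched as follows; this is a hypothetical re-implementation of the stated metric (the function name `match_fractions` is illustrative, not from the paper), comparing each generated token sequence against its source position by position:

```python
def match_fractions(generated, sources, partial_threshold=0.8):
    """Fraction of examples whose generated tokens match the source
    exactly, and fraction matching on at least `partial_threshold`
    of the positions (0.8 following the setup described above)."""
    exact = partial = 0
    for gen, src in zip(generated, sources):
        overlap = sum(g == s for g, s in zip(gen, src)) / len(src)
        exact += overlap == 1.0
        partial += overlap >= partial_threshold
    n = len(sources)
    return exact / n, partial / n

# match_fractions([[1, 2, 3, 4, 5], [1, 2, 3, 4, 9]],
#                 [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]])
# -> (0.5, 1.0)
```

In the rebuttal's setting, `generated` would hold the 250 greedily decoded first-stream tokens per example and `sources` the corresponding ground-truth EnCodec tokens.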
**_Regarding the number of codebooks (question 1)_:** During early experiments, we compared several numbers of codebooks and codebook sizes. We noticed that larger codebooks (e.g., 2048) would improve quality, but increasing the size further (we tested 4096 and 8192) did not improve it beyond that point. Early experiments also indicated that using more codebooks (e.g., 8) was detrimental to MusicGen models with 1.5B parameters. However, as depicted in Table 1 in the Rebuttal Document, we show that we can extend to 8 codebooks for stereo data, using 4 codebooks for the left channel and 4 codebooks for the right channel. In that case, we noticed no regression compared with the 4-codebook model (e.g., downmixing the output to mono to compare to the previously trained mono models), while the model was clearly able to capture the stereo image of the generated music, as measured by subjective evaluations.
**_Regarding samples for different text encoders (question 2)_:** As per the reviewer’s request, we shared samples for various text encoders. Specifically, we share samples for T5, Flan-T5, and CLAP using 12 codebooks (previously just CLAP, as done in MusicLM), CLAP using 24 codebooks, and CLAP without quantization. Interestingly, we observe that the quantization of the CLAP embedding as done in MusicLM is detrimental to the final score of the model. Samples were shared with the AC through an external link as per the NeurIPS guidelines. We additionally provide results for the different setups in the table below:
| Text Encoder | FAD | KL | Clap sc. | OVL. | REL. |
|-----------------------------|----------|----------|----------|----------------|----------------|
| T5 (default) | **3.12** | **1.29** | 0.31 | 84.9 ± 1.8 | **82.5** ± 1.3 |
| FLAN-T5 | 3.36 | 1.30 | 0.32 | **86.3** ± 1.8 | 80.8 ± 1.9 |
| CLAP (RVQ n_q=12 bins=1024) | 4.23 | 1.53 | 0.32 | 79.8 ± 1.8 | 77.3 ± 1.5 |
| CLAP (RVQ n_q=24 bins=1024) | 4.18 | 1.47 | 0.32 | 82.0 ± 1.4 | 76.7 ± 1.3 |
| CLAP (no quantizer) | 5.13 | 1.53 | **0.34** | 84.5 ± 1.2 | 80.2 ± 1. |
---
Rebuttal Comment 1.1:
Title: Post Rebuttal
Comment: Thank the authors for the responses. All of my questions are answered with details and well addressed. Now I agree that the paper is technically solid, and the contribution is clear. I will recommend accepting the paper. | Summary: This paper introduces MUSICGEN, an approach for generating music conditioned on either text or melody representation. MUSICGEN consists of a single-stage transformer language model (LM) augmented with efficient token interleaving patterns, eliminating the requirement of employing multiple cascaded models. The authors extensively evaluate the proposed approach through a combination of automatic evaluations and human studies, demonstrating its superiority over the evaluated baselines on a widely-used text-to-music benchmark.
Strengths: 1. The experimental setup and results presented in the paper are robust and persuasive, providing strong evidence for the proposed approach.
2. The paper includes a detailed and comprehensive discussion of the patterns related to the multi-codebook technique, enriching the understanding of this aspect of the research.
3. The demos in supplementary materials are impressive.
Weaknesses: 1. The novelty of the proposed approach is somewhat limited, and the motivation for the research is not clearly articulated. Additionally, certain design choices in the methodology lack explanation, necessitating further elaboration and clarification.
2. Some notations in the formulas are confusing, such as the usage of $\widetilde{U}$ in Equation (2) and $\hat{p}$ in Line 85. These notations should be introduced and defined more explicitly to avoid confusion.
3. Attention to writing details is required. For instance, when referencing equations in the main text, only the number is provided without explicitly mentioning “Equation” (e.g., Line 89, Line 101).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Despite having some concerns regarding the motivation and novelty, I am impressed by the thoroughness of the experiments. If the authors can further improve the writing quality, I am willing to give a higher score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **_Regarding novelty and motivation:_** The novelty and motivation of the proposed method (MusicGen) lie in designing a simple and efficient auto-regressive model for text-to-music generation. Unlike prior work, which consists of cascades of models using either super-resolution or semantic tokens, we present a single-stage language model enabled by an efficient codebook interleaving strategy. Additionally, we present a single model that performs both text- and melody-conditioned generation and demonstrate that the generated audio is coherent with the provided melody and faithful to the text-conditioning information.
Following the reviewer's concern, we additionally include an extension of MusicGen to stereophonic music generation, making it the first model to generate stereophonic music conditioned on textual descriptions. We encode separately the left and right channels using the same EnCodec model and use a specifically designed codebook interleaving pattern depicted in Figure 1 of the rebuttal document. OVL. and REL. scores can be seen below (and inside the attached PDF file), samples were shared with the AC through an external link as per the NeurIPS guidelines. Due to all of the above, we believe our work is novel and the contributions are important and would be valuable to the community.
### Stereo Model Human Study.
| Cb. Pattern | Stereo? | OVL. | REL. |
|----------------------|---------|----------------|----------------|
| mono delay | ✗ | 85.0 ± 1.6 | **80.6** ± 1.2 |
| stereo partial delay | ✗* | 84.5 ± 1.8 | 79.4 ± 1.2 |
| stereo partial delay | ✓ | **86.7** ± 1.1 | **80.4** ± 1.1 |
| stereo delay | ✓ | 85.5 ± 1.2 | 78.3 ± 1.2 |
*: downmixed to mono
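As a hedged illustration of the codebook interleaving idea discussed above (the paper defines the exact patterns; `delay_pattern` and the padding convention here are illustrative, not the authors' implementation): the "delay" pattern shifts codebook k right by k steps, so all K codebooks can be modeled jointly in one autoregressive pass with only K - 1 extra steps, instead of K cascaded passes.

```python
def delay_pattern(codes, pad=-1):
    """Arrange a K-codebook token grid (K lists of length T) into the
    'delay' interleaving: codebook k is shifted right by k steps, so
    one token per codebook is emitted at every sequence position and
    the whole grid spans T + K - 1 autoregressive steps."""
    K, T = len(codes), len(codes[0])
    out = [[pad] * (T + K - 1) for _ in range(K)]
    for k in range(K):
        for t in range(T):
            out[k][t + k] = codes[k][t]
    return out
```

For K = 2 codebooks over T = 3 frames, `delay_pattern([[1, 2, 3], [4, 5, 6]])` yields `[[1, 2, 3, -1], [-1, 4, 5, 6]]`; the stereo patterns in the rebuttal apply the same idea to the left/right channel codebooks.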
**_Regarding notations and writing quality_:** We thank the reviewer for their feedback and suggestions. We will fix all writing issues raised by the reviewer, including Equation references and method notations.
When introducing $\tilde{U}$, we added the clarification: “Let us build a second sequence of random variables $\tilde{U}$ using the auto-regressive density $p$, e.g., we define recursively $\tilde{U}_0 = 0$ and for all $t > 0$, …
We introduce $\hat{p}$ as an estimate of $p$ using a deep learning model. We clarified “This means that if we can fit a perfect estimate $\hat{p}$ of $p$ with a deep learning model, then we can fit exactly the distribution of $U$.” | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their detailed reviews and valuable feedback. We are happy the reviewers found our experimental setup and results robust and persuasive. We are also glad the reviewers found our method to have both strong performance and efficient modeling. We address each of the reviewers' questions and concerns in a separate comment below.
Pdf: /pdf/844e0ce13ee86ccb2cfa3c899e4670a39318a7c0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Fine-grained Expressivity of Graph Neural Networks | Accept (poster) | Summary: With graphon theory, this work quantifies which distance MPNNs induce and thus provides a deeper understanding of MPNNs’ capacity to capture graph structure, precisely determining when they can and when they cannot assign similar and dissimilar vectorial representations to graphs.
Strengths: 1. Strong universality results on graphon.
2. The iterated degree measure is intuitive and insightful.
Weaknesses: 1. Graphons focus on graph limits, while in real-world settings like graph classification, graphs are small and MPNNs still lack expressivity.
2. Only sum aggregation is analyzed, while in applications, max and mean aggregation are also important.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you generalize the theory to max and mean aggregation?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their fair and constructive review.
> Graphons focus on graph limits, while in real-world settings like graph classification, graphs are small and MPNNs still lack expressivity.
You are correct that graphons are used in the GNN literature via large graphs. These works use graphons to formulate a generative model of the data. That is, graphs in the dataset are seen as sampled from some (usually small) set of graphons. In that approach, you need large graphs for the asymptotics to work. However, our work does not fall under this category, and does not define any generative model of graphs. In general, graphons are not limits of only large graphs. Any graph is a graphon, so graphons are also “limits” of small graphs. The space of graphs is a dense subset of the space of all graphons. The reason to consider graphon analysis is that we want to work with a compact metric space for results like the Stone–Weierstrass theorem to work. The space of graphons is the completion of the space of graphs to a compact metric space. To conclude, any probability distribution of graphs, including distributions restricted to small graphs, is a valid probability distribution of graphons. In the camera-ready version of the paper, we will clarify this point and the different philosophical approach from other GNN papers that use graphon analysis.
> Only sum aggregation is analyzed, while in applications, max and mean aggregation are also important.
> Can you generalize the theory to max and mean aggregation?
Good question. Currently, the 1-WL paradigm of summing over the neighbors is crucial in our proofs, and extending our theory to max/mean aggregation functions is not straightforward. We are aiming to explore this in future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply. I keep my score at 7. | Summary: This work aims at a fine-grained metric to characterize representation differences from MPNNs. Specifically, they first generalize MPNNs to iterated degree measures and prove that the Prokhorov metric and the unbalanced Wasserstein metric can be used to bound the node/graph representation difference. This relation is validated in their experiments on random graphs.
Strengths: - [significance] This work builds a connection between the GNN representations and the graph similarity, and makes a solid theoretical contribution to understand GNN expressivity.
- Empirical results well support the theory.
Weaknesses: - The motivation and the conclusion of the untrained-GNN experiments are not very clear to me. How is "an untrained GNN can outperform a trained GNN" related to the previous conclusions, and what message does this part try to convey?
- Besides, TUDataset could be too small to faithfully evaluate the performance. Results on other datasets such as ZINC or OGB may further improve the evaluation.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Could you provide an intuitive explanation for Theorem 4, the universal theorem? Does it essentially mean that MPNNs have the same separation power as 1-WL on graphons?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their fair and constructive review.
> The motivation and the conclusion of the untrained-GNN experiments are not very clear to me. How is "an untrained GNN can outperform a trained GNN" related to the previous conclusions, and what message does this part try to convey?
Our theory states: two graphs are close in our graph distance if and only if all $L$-layer, $C$-Lipschitz MPNN embeddings are close. We want to apply it in the context of graph classification (Q2), which translates to: two graphs are in the same class if and only if all MPNN embeddings are close --- this includes both trained and untrained embeddings. This thus motivates us to examine the effectiveness of untrained MPNN embeddings. Moreover, in practice, we cannot compute *all* MPNN embeddings, so we test how well a subset of them works as an approximation (to the graph distance) for graph classification. Our experiments demonstrate that using a subset of untrained MPNN embeddings does achieve competitive performance, illustrating the utility of our theoretical results.
We will clarify better in the camera ready version the motivation behind the experiments. The theory says that we need to run over all possible MPNNs, and take the largest distance in the output space. In our experiments, we instead run over a finite set of random MPNNs. We indeed did not provide any “Monte Carlo” theory that relates the full MPNN space to the finite sample, and the theory in previous sections should be seen as heuristically motivating the experiments, not rigorously justifying them. We will clarify this in the camera ready version.
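The finite-sample approximation described above (a set of random, untrained MPNNs standing in for the full MPNN space, with the maximum output distance as a Monte Carlo proxy for the graph distance) can be sketched in a few lines. This is a hedged toy sketch, not the paper's models: `mpnn_embedding`, the tanh update, the mean readout, and all hyperparameters are illustrative choices.

```python
import math
import random

def mpnn_embedding(adj, h=3, d=8, seed=0):
    """One random (untrained) sum-aggregation MPNN with mean readout.
    adj is an adjacency list; the 1/n normalization loosely mirrors the
    graphon formulation. All architectural choices here are illustrative."""
    rng = random.Random(seed)
    n = len(adj)
    x = [[1.0] * d for _ in range(n)]  # constant initial node features
    for _ in range(h):
        W = [[rng.gauss(0, 1) / math.sqrt(d) for _ in range(d)] for _ in range(d)]
        agg = [[sum(x[u][j] for u in adj[v]) / n for j in range(d)] for v in range(n)]
        x = [[math.tanh(sum(W[i][j] * (x[v][j] + agg[v][j]) for j in range(d)))
              for i in range(d)] for v in range(n)]
    return [sum(x[v][i] for v in range(n)) / n for i in range(d)]  # mean readout

def sampled_graph_distance(adj1, adj2, num_mpnns=10):
    """Monte Carlo proxy: max Euclidean embedding distance over a finite
    sample of random MPNNs (the theory quantifies over *all* of them)."""
    return max(math.dist(mpnn_embedding(adj1, seed=s), mpnn_embedding(adj2, seed=s))
               for s in range(num_mpnns))
```

For example, the proxy distance between a triangle and itself is exactly zero, while between a triangle and a 3-node path it is strictly positive, since their degree profiles already separate the aggregated features after one layer.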
> Besides, TUDataset could be too small to faithfully evaluate the performance. Results on other datasets such as ZINC or OGB may further improve the evaluation.
Good point, we will try to add them in the camera ready.
> Could you provide an intuitive explanation for Theorem 4, the universal theorem? Does it essentially mean that MPNNs have the same separation power as 1-WL on graphons?
Intuitively, it says that every continuous function on graphons that is invariant under 1-WL colors can be approximated by an MPNN. This is in some sense a “continuous” or “metric” (and hence stronger) variant of the statement that MPNNs and 1-WL have the same separation power on graphons. | Summary: The paper considers the continuous variant of the 1-WL test and leverages it to characterize the expressive power of MPNNs on graphons.
The authors show that if two graphons have similar MPNN outputs then they are close in their metric, extending the existing result proving the opposite implication (graphons similar in their metric have similar MPNN outputs).
They establish a connection between the continuous variant of the 1-WL test and MPNNs to tree distance and tree homomorphism densities, where equivalences were only known to hold on a discrete level. Empirically, they show that untrained MPNNs (paired with a trained final classifier) obtain competitive performances on graph-level tasks. Finally, they experimentally compare different models in their ability to preserve graph distances.
Strengths: 1. The paper refines existing results on metrics and MPNNs outputs.
2. The paper aims at closing the gap between results that were known to hold on a discrete level (where graphs are exactly isomorphic), but that were underexplored at the continuous level.
Weaknesses: 1. The paper is hard to read, as it introduces a lot of existing concepts and builds on them. Maybe it is inevitable due to intrinsic theoretical nature of the paper. However, I believe some extra effort is needed to remind the reader that a concept or theorem that is going to be introduced is, in simpler words, what has been presented in the introduction as a contribution.
2. I am unsure of the implications of this work. The theory is beautiful, but I am left unsure of what we can do with this gained knowledge. I don't see how this can help experimentally, or in real world cases.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Can you discuss the implications of this work? While I understand that the theoretical claims hold, what can we do knowing that they hold?
2. Similarly to the above question, what is the take-home message of the experimental evaluation? I am unsure of the relevance of the questions you set to answer.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their fair and constructive review.
> The paper is hard to read, as it introduces a lot of existing concepts and builds on them. Maybe it is inevitable due to intrinsic theoretical nature of the paper. However, I believe some extra effort is needed to remind the reader that a concept or theorem that is going to be introduced is, in simpler words, what has been presented in the introduction as a contribution.
Thank you, we agree. We will use the additional page in the camera-ready version to add some intuitions and make the paper more accessible to the wider GNN community.
1. We will define and discuss tree distance in the main paper instead of the appendix.
2. In *Message-passing graph neural networks* we will add a more extensive discussion about the relation between the formulation of MPNNs on IDMs and standard MPNNs on graphons/graphs. We agree that seeing the connection between the two formulations is critical for understanding our paper.
3. We will extend some of the formulas, opting for clearer longer explicit formulas rather than shorter implicit ones. For example, the formulas in line 253 can be replaced by explicit formulations.
We kindly ask you to provide additional suggestions on specific parts that were hard to understand and should be improved.
> I am unsure of the implications of this work. The theory is beautiful, but I am left unsure of what we can do with this gained knowledge. I don't see how this can help experimentally, or in real world cases.
> Can you discuss the implications of this work? While I understand that the theoretical claims hold, what can we do knowing that they hold?
We mainly view our work as a theoretical contribution, leading to a better understanding of what kind of functions GNNs express, potentially leading to a better understanding of their predictive performance. We think that our work is a necessary first step of defining the desired notion of graph similarity for GNNs, which then enables future work that is focused on better understanding these notions and the expressivity of GNNs.
One main practical implication that the theory hints at is that we can use untrained MPNNs to solve various graph machine learning problems. In the camera-ready version, we will add a section discussing this.
> Similarly to the above question, what is the take-home message of the experimental evaluation? I am unsure of the relevance of the questions you set to answer.
Our experimental evaluation intends to show (Q1) how MPNN embedding Euclidean distance correlates with our proposed graph metrics, which illustrates Corollary 5; and (Q2) how useful untrained MPNN embeddings are for discriminating graphs, since our main results state that two graphs are close in our graph metrics if and only if all $L$-layer $C$-Lipschitz MPNN embeddings are close, which includes both trained and untrained MPNNs. Our experiments demonstrate that using a subset of untrained MPNN embeddings does achieve competitive performance, illustrating the utility of our theoretical results.
An important implication for practitioners is to use MPNN embedding (distance) as an efficient lower bound to compute distances of large graphs. More precisely, the time complexity of computing $\delta_{\mathsf{W}, h}$ is $\mathcal{O}(h \cdot n^5 \cdot \log n)$ (the same order as the WL distance in [26], even worse for $\delta_{\mathsf{P}, h}$). In contrast, the time complexity of computing $h$-layer, $d$-dimensional MPNN embedding distance (with $\varphi_t$, $\psi$ chosen as the composition of linear maps and pointwise nonlinearities) is $\mathcal{O}(h · n · |E(G)| + n · d)$, which is massively cheaper, especially for large, sparse graphs that are ubiquitous in real-world applications.
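As a back-of-envelope illustration of this gap (a sketch only: constants are dropped and the numbers are purely indicative):

```python
import math

def ops_graph_metric(n, h):
    """O(h * n^5 * log n) bound for computing delta_{W,h}, constants dropped."""
    return h * n ** 5 * math.log(n)

def ops_mpnn_embedding(n, num_edges, h, d):
    """O(h * n * |E| + n * d) for an h-layer, d-dimensional MPNN embedding."""
    return h * n * num_edges + n * d

# A sparse graph: n = 1000 nodes, average degree ~10, h = 3 layers, d = 64.
ratio = ops_graph_metric(1000, 3) / ops_mpnn_embedding(1000, 10 * 1000, 3, 64)
```

Here the metric bound exceeds the embedding cost by a factor of several hundred million, which is the sense in which the embedding distance is "massively cheaper" for large, sparse graphs.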
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their reply. However, I would still argue that the implications of this work are hard to see, and consequently the experimental section is hardly relevant. However, I understand the authors' claim that this represents a first step, so potentially future work will have broader impact. I will keep my score. | Summary: The paper studies the classes of MPNNs on graphons, ultimately showing that the MPNN representations are sufficiently close (up to constants depending on Lipschitz regularity and layers) _if and only if_ the graphons are close according to several metric distances, mainly the Prokhorov metric and the unbalanced Wasserstein metric.
Strengths: - The paper addresses a direction of research which is quite relevant, namely studying the response of GNNs and their dependence on properties of the input. In fact, the widely studied and adopted WL test is often a blunt yardstick for measuring the sensitivity of GNNs and investigating their dependence on metric functions defined on the space of graphs is worth looking into.
- The exposition of the paper and overview of the related works is well executed. The setup is clear.
- On a mathematical level, the work extends the results of [26] in several non-trivial ways and expands the efforts to understand GNNs through graphons in meaningful directions.
Weaknesses: - The fact that MPNNs exhibit a Lipschitz property compared to distances on graphons does not seem surprising to me (i.e. Lemma 3) given their regularity. The converse statement (Theorem 6) is perhaps less obvious, although I fail to see how can this be used in practice? In fact, it is not the qualitative $\epsilon-\delta$ statements that are that interesting in my opinion, but rather the quantitative bounds which seem though to be lacking any significant insight? For example in Lemma 3 the constant depends on the fact that the features "have to" be bounded on a compact metric space, which is ultimately not enlightening. More generally, it would be interesting not to prove some Lipschitz property of MPNNs, but perhaps how some more transparent properties of the graph structure translate _quantitatively_ into the Lipschitz bounds.
- The analysis focuses on the case where graph(ons) are taken without features, which can be limiting although this setup is often conventional in the literature on expressivity of GNNs.
- There is an impractical cost for computing the distances which should be explicitly reported in the main text -- as it stands, mentioning the polynomial running time in Theorem 2 is a little too vague. While this is probably minor compared to the other points, it also contributes to reducing the impact of the submission.
- The experimental part is a little confusing. First, experiments on untrained MPNNs should be much better motivated; as it stands, one can mainly guess what you are trying to accomplish here. Second, validating that MPNNs can separate better with increased hidden dimension is not surprising; besides, where has the role of the hidden dimension been discussed before? The fact that you can be competitive on some TUdataset "without" training over the layers is again not that surprising. These datasets are quite sensitive to hyperparameter tuning, and I suspect that even removing the weights from the layers altogether and only training the encoder and/or decoder would be quite competitive. Maybe using different datasets -- more structural ones -- such as ZINC could be a little more indicative for a comparison.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - As far as I understand, extending these type of results to GNNs that do not follow the 1-WL paradigm of summing over the neighbours is possibly non-trivial right? I think that it would be interesting in general to see how quantitative $\delta$-bounds depend on different paradigms for using the graph information (from structural encoding, to Graph-Transformers).
- I find that restricting to the feature-less case is a little detrimental to the overall message and I wonder in fact how challenging is extending this formalism to graphons with signals? I am not expecting a revision but am curious to hear comments here to gauge if this direction is actually pursuable or not.
- In the experiments you claim that your results support Corollary 4? What is Corollary 4 and where in the manuscript you have mentioned the role of the hidden dimension? Besides, is it in general really surprising that MPNNs with more hidden dimension may be better at separating graphs?
I think the paper has technical merits despite some mild concerns on the overall impact to the broader GNN community, and the score reflects this. The results are, on a qualitative level, intuitive, while on a quantitative level it is hard to extract some insight. The experimental section needs some revision as per my previous comments ---evaluating over a more structural-oriented task such as ZINC is perhaps a plus but not a must.
Disclaimer: I have not checked the mathematical proofs in the appendix.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Yes, limitations have been discussed and no societal impact can be foreseen.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their fair and constructive review.
> The fact that MPNNs exhibit a Lipschitz property compared to distances on graphons does not seem surprising to me (i.e. Lemma 3) given their regularity. The converse statement (Theorem 6) is perhaps less obvious, although I fail to see how can this be used in practice? [...]
Good point. Proving a quantitative version of Theorem 6 is, in fact, highly non-trivial; cf. the very involved proof of the "Inverse Counting Lemma" in the book of Lovász [78]. We mainly view our work as a theoretical contribution establishing the first step of defining the appropriate notions. Lemma 3 and Theorem 6 show that our definitions are applicable in this sense. Then, proving quantitative bounds is the logical next step in this line of work. Additionally, we think that stating that the constant in Lemma 3 depends on the fact that the features "have to" be bounded on a compact metric space undermines the actual statement of Lemma 3: this is rather a technical detail that stems from the fact that graphons are normalized, but our MPNN functions are not. The vital parts of the constants are the Lipschitz constants of the individual functions.
> The analysis focuses on the case where graph(ons) are taken without features, which can be limiting although this setup is often conventional in the literature on expressivity of GNNs.
> I find that restricting to the feature-less case is a little detrimental to the overall message and I wonder in fact how challenging is extending this formalism to graphons with signals? [...]
That is an interesting question that fell victim to the page limit. In the camera-ready version, we will clarify better that the analysis is restricted to graphons without signals and add a remark on how this can be extended to graphons with signals.
Here is what one has to adapt: Fix some compact metric space $K$ and consider graphons $W$ with a measurable signal function $\ell \colon [0,1] \to K$. Replace the one-point space $\mathbb{M}_0$ by $K$ and adapt the definition of IDMs to include the old color in the new color (see the original definition of Grebík and Rocha [52]; including the old color in the new color becomes necessary also for 1-WL when the initial coloring is not constant since, otherwise, the second coloring would not include the initial coloring and so on).
Then, the IDM spaces $\mathbb{M}_{h}$ are still compact metrizable.
Modify the 1-WL for graphons by setting $\mathsf{i}_{(W,\ell),0} \coloneqq \ell$. In an MPNN model, $\varphi_0$ is now a Lipschitz function $K \to \mathbb{R}^{d_0}$. Then, the universality result still holds since Lipschitz functions on $K$ are dense in $C(K)$. Finally, adapt the metrics by using the metric of the compact metric space $K$.
Since the definitions of IDMs and our distances become more complicated and less intuitive, we did not directly include this in the main body. Moreover, homomorphisms and the tree distance do not have meaningful definitions for graphs with signals (as far as we are aware). Hence, we stated our main result for graphons without signals.
> There is an impractical cost for computing the distances which should be explicitly reported in the main text -- as it stands, mentioning the polynomial running time in Theorem 2 is a little too vague. [...]
Good point; we will make sure to point out the upper bounds, which we derived in the appendix, in the main body of the camera-ready version ($\mathcal{O}(h \cdot n^5 \cdot \log n) \text{ for } \delta_{\mathsf{W},h},\, \mathcal{O}(h \cdot n^7) \text{ for } \delta_{\mathsf{P},h}$). Moreover, we will stress that we do not propose computing the metrics in practice as a computational tool. The paper aims to hint that MPNNs can replace these metrics. The metrics are meaningful but hard to compute. On the other hand, MPNNs are mainly seen as black-box models but are easy to compute. The paper shows that *practical* MPNNs have the separation power of the *theoretical* metrics.
> The experimental part is a little confusing. [...]
We will clarify better in the camera-ready version the motivation behind the experiments. The theory says that we must run over all possible MPNNs, and take the largest distance in the output space. In our experiments, we instead run over a finite set of random MPNNs. We did not provide any “Monte Carlo” theory that relates the full MPNN space to the finite sample, and the theory in previous sections should be seen as heuristically motivating the experiments, not rigorously justifying them. We will clarify this in the camera-ready version.
> As far as I understand, extending these type of results to GNNs that do not follow the 1-WL paradigm of summing over the neighbours is possibly non-trivial right? [...]
These are interesting directions that we aim at exploring in future work. Currently, the 1-WL paradigm of summing over the neighbors is crucial in our proofs.
> In the experiments you claim that your results support Corollary 4? What is Corollary 4 and where in the manuscript you have mentioned the role of the hidden dimension? Besides, is it in general really surprising that MPNNs with more hidden dimension may be better at separating graphs?
We apologize for the typo and clarify that “Corollary 4” refers to Corollary 5, which states that two graph(on)s are close in our distance if and only if all their $L$-layer, $C$-Lipschitz MPNN embeddings are close in the Euclidean metric. The wording “all” can be interpreted as using infinitely many MPNNs, or as collecting them into one MPNN with infinite hidden dimension. In the experiment (Q2), we investigate the effect of increasing the hidden size and observe that the performance of untrained MPNNs increases with it, supporting our theory.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, for clarifying how to extend the analysis to account for features and for commenting on the intended goals in the experimental section.
Once again, I think this is a nice work -- and considered the other reviewers' scores I am confident it will be up for acceptance -- however my reservations about the broader implications remain. Selling it as a continuous version of 1-WL test is definitely appealing, but I think we cannot relate to the impact that the original work had in the discrete case.
Some of the theoretical results that might be more appealing (quantitative statements / going beyond architectures bounded by the 1-WL test which nowadays are more and more only belonging to the academic world with the exception of node classification tasks over a single very large graph) are left for future work. In general, from a practitioner's side, I really can't see how this work can help settle some problems in the community (generalization, how properties of the training set affect the dynamics, how to design GNNs meaningfully more powerful) although I recognize that, potentially, this could be a first step towards achieving some of these goals.
I still feel that the experimental section is a little weak, but this to me is super marginal considered the theoretical nature of the work.
I also agree with another reviewer's observation about the choice of the layer-wise normalization not being conventional, this should be further emphasized in the paper.
Having said that, I will maintain my score; thank you again for the rebuttal. | Rebuttal 1:
Rebuttal: We thank the reviewers for their fair and constructive reviews and appreciate that they recognize that we present a beautiful theory for graph similarity. Combined with a novel generalization of message-passing graph neural networks (MPNNs) to graphons, our theory allows us to prove that graph(on)s are close in a continuous variant of the 1-WL if and only if their MPNN outputs are close, an equivalence that was only known to hold in the discrete case (where graphs are either distinguished or not).
We mainly view our work as a theoretical contribution, leading to a better understanding of what kind of functions MPNNs express, potentially leading to a better understanding of their predictive performance. We further think that our work is a necessary first step of defining the desired notion of graph similarity for MPNNs, which then enables future work that is focused on better understanding these notions and the expressivity of MPNNs.
To illustrate our theory, we empirically investigate the relationship between MPNN embedding Euclidean distances and our proposed graph distances. Our experiments show that untrained MPNNs can be surprisingly effective, as our theory predicts, as long as we use **many** of them. Ideally, we would use **all** of them to preserve the graph distance (as our theoretical result states), but this is, of course, not possible. Our experiment shows that using enough of them (measured by the hidden size) suffices for downstream classification. Moreover, one can view MPNN embedding distances as an efficient lower bound for our graph distances, since the time complexity of computing $h$-layer, $d$-dimensional MPNN embedding distances is massively cheaper than that of computing our distances $\delta_{\mathsf{W},h}$ and $\delta_{\mathsf{P},h}$, which is still polynomial but with an impractical exponent.
Pdf: /pdf/75e6939d7a48ba70fea7007961b8dd17d3b6d830.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose a novel way to generalize the expressivity of graph neural networks (and other general message passing algorithms) to the graphon case.
Furthermore, they identify metrics on graphons (and, consequently, graphs) that make it possible to bound the distance between MPNN representations of the graphons.
Strengths: The paper provides an important step towards a more realistic analysis of MPNN expressivity.
This paper goes beyond both common approaches to measuring the expressivity of MPNNs, presenting an epsilon-delta result for two metrics on graph(on)s and MPNNs with a bounded Lipschitz constant and a fixed number of layers.
This properly generalizes the discrete setting and has potentially high impact sparking further research.
Weaknesses: It seems to me that the definition of GNNs in Line 212 1/2 is a rather severe deviation from most MPNN formulations, as it requires activations to grow linearly with the graph size to offset the normalization $\frac{1}{|V(G)|}$. This seems to be generally only possible for dense graphs when e.g. a fixed set of (one hot encoded) categorical labels is present.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I wonder if you can give some insights regarding the issue mentioned above?
Also, it is not clear to me how the discussion of Q2 in the experiments is connected to the theory in the previous sections. I might have missed the connection, but to me it is unintuitive why the Lipschitz-arguments should imply that larger $L$ should result in more similar outputs.
Minor Issues:
l 299: 'in an MPNNs'
p7 and p8 mention in relatively close distance two 'elegant' proofs. That struck me as a bit odd.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors discuss technical limitations such as the current restriction to discretely labeled graphs and 1-WL.
I don't see immediate negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their fair and constructive review.
> It seems to me that the definition of GNNs in Line 212 1/2 is a rather severe deviation from most MPNN formulations, as it requires activations to grow linearly with the graph size to offset the normalization. This seems to be generally only possible for dense graphs when e.g. a fixed set of (one hot encoded) categorical labels is present.
> I wonder if you can give some insights regarding the issue mentioned above?
The experiments show that normalized sum aggregation performs comparably to sum aggregation, and better than average aggregation. We have included the average aggregation experiments in the new Table 5 in the global response PDF:
* Table 1 (main text) studies MPNNs with sum aggregation and $1/|V(G)|$ normalization, corresponding to our theory
* Table 4 (appendix) studies MPNNs with sum aggregation without $1/|V(G)|$ normalization
* new Table 5 (global response PDF) studies MPNNs with mean aggregation without $1/|V(G)|$ normalization
All of the above use the same MPNN backbone ($3$ layers, hidden size $512$) and the same experimental setup.
However, if you have a dataset of graphs with sizes that range from very small to very large, and the large graphs are sparse, then normalized sum aggregation is not appropriate. Luckily, most datasets consist of graphs of sizes that do not vary too much. Normalized sum aggregation is appropriate for such datasets, even if the graphs are sparse, as the normalization can be compensated by larger weights. We will add a comment about this in the camera ready version, and include experiments with average aggregation for comparison.
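For concreteness, the three aggregation variants compared across the tables can be sketched as one message-passing round in NumPy. This is an illustrative sketch under our own naming (`aggregate`, the mode labels), not the experiment code:

```python
import numpy as np

def aggregate(A, x, mode):
    """One round of neighbor aggregation under the three compared schemes.

    A: (n, n) adjacency matrix; x: (n, d) node features.
    mode: 'sum'      -- plain sum aggregation (Table 4)
          'norm_sum' -- sum with 1/|V(G)| scaling (Table 1, matches the theory)
          'mean'     -- average over each node's neighbors (new Table 5)
    """
    m = A @ x  # sum of neighbor features per node
    if mode == 'sum':
        return m
    if mode == 'norm_sum':
        return m / A.shape[0]
    if mode == 'mean':
        deg = A.sum(axis=1, keepdims=True)
        return m / np.maximum(deg, 1.0)  # guard against isolated nodes
    raise ValueError(mode)
```

Note that for graphs of roughly equal size, `'norm_sum'` differs from `'sum'` only by a near-constant factor that larger learned weights can compensate, which is why the two perform comparably in our experiments.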
On the theory side, the normalization by the number of vertices stems from viewing graphs as graphons. Requiring activations to grow linearly with the graph size makes sense in this context. For a very large graph, a small individual part of the graph does not play a significant role. More generally, if the size of our graphs grows towards infinity, then the importance of a set of fixed size goes towards zero. The activations have to grow with the graph size to still identify these differences between graphs. Hence, the Lipschitz constant we defined for an MPNN model has to grow to identify these differences. This is also what our theoretical result states: to guarantee $\varepsilon$-similarity of graph(on)s in our distances, we need similarity of MPNN models up to some Lipschitz constant $C$, where $C$ goes towards infinity as $\varepsilon$ goes to zero.
Moreover, when taking an activation function from our setting to the usual, non-normalized setting, the normalization would have to become part of the function (to yield the same output), which would offset the linear growth.
> Also, it is not clear to me how the discussion of Q2 in the experiments is connected to the theory in the previous sections. I might have missed the connection, but to me it is unintuitive why the Lipschitz-arguments should imply that larger $L$ should result in more similar outputs.
Intuitively, you should equate the depth $L$ with the number of steps in 1-WL. With more layers you can separate graphs that can only be separated with more steps of 1-WL. Hence, larger $L$ can separate more graphs.
Our theory states: two graphs are close in our graph distance if and only if all $L$-layer $C$-Lipschitz MPNN embeddings are close. We want to apply it in the context of graph classification (Q2), which translates to: two graphs are in the same class if and only if all MPNN embeddings are close—this includes both trained and untrained embeddings. This motivates us to examine the effectiveness of untrained MPNN embeddings. Moreover, in practice, we cannot compute **all** MPNN embeddings, so we test how well a subset of them serves as an approximation (to the graph distance) for graph classification. Our experiments demonstrate that using a subset of untrained MPNN embeddings does achieve competitive performance, illustrating the utility of our theoretical results.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications. My questions have been answered. I agree that it would be valuable to mention that typical graph datasets contain graphs of similar size, for which the normalization should indeed not be an issue, and to caution readers regarding the case of varying graph sizes. | null | null | null | null | null | null |
Domain Re-Modulation for Few-Shot Generative Domain Adaptation | Accept (poster) | Summary: This work studies generative domain adaptation (GDA), on the StyleGAN2 architecture. Methods in this setting are commonly evaluated in terms of quality, diversity, and cross-domain consistency. This work claims to also be the first to explore the abilities of memory (“retain knowledge from previously learned domains”) and domain association (“integrate multiple learned domains and synthesize hybrid domains”). To tackle all these aspects, a method called DoRM is proposed. DoRM learns a new mapping network and affine transformations – M&A - (components of StyleGAN2) for each new domain. Hybrid domains are generated by interpolating the style codes of the original mapping and affine layers with the new ones.
Strengths: 1. The paper is clearly written and easy to follow.
2. The tasks studied in this paper are interesting and relevant.
3. The proposed method is intuitive and novel, and as demonstrated by some experiments it is effective in performing domain adaptation.
Weaknesses: The claimed contributions are incorrect, and the evaluation is severely lacking. Both issues stem from outright ignoring the role and contributions of the most relevant previous works – HyperDomainNet [1], DynaGAN [18], Domain Expansion [24]. Below I list several examples of severe problems with claims and evaluations.
All three works consider settings that cover the supposedly new aspects of domain adaptation that are discussed in this paper. Both DynaGAN and HyperDomainNet rely on similar modulation techniques and achieve *”memory”* the same way. In Domain Expansion, the original domain is preserved on a dedicated subspace. Also, both DynaGAN and Domain Expansion discuss *”association”* at length (using the terms “domain interpolation” and “domain composition”). So, clearly, this work’s claims (lines 64, 247) of being the first to consider this setting are false.
The previous fact is even somewhat acknowledged in the Related Work section (lines 99-105). The three works are said to “fall short in integrating multiple domains”. However, this claim is not supported by any experiment.
Not only is this claim unsupported, but the authors also choose not to compare their method with the three most relevant works - all of which have code on Github and all have been published well before the submission deadline (October 2022-January 2023). Instead, the method is compared to the more conventional setting of “single-domain, entire generator” adaptation, which is negligent at best. Comparing the generation of “hybrid” domains to CDC [25] which was not designed for this purpose (and published in 2021) and not to DynaGAN and Domain Expansion is unreasonable. Similarly, claiming superiority over CDC in terms of storage required is irrelevant. DoRM trains a mapping network (~6M params) for each new domain, while HyperDomainNet trains ~6K params, and Domain Expansion requires no additional weights whatsoever!
The inaccurate contribution claims and lack of comparison to relevant previous works is fatal as it prevents the community and practitioners from understanding the landscape of works in this field. Perhaps the paper was written around the end of 2022 and is simply not up to date. I urge the authors to consider significantly rewriting this paper before resubmitting or arXiving a preprint.
Additional issues:
1. Measuring consistency between domains using ArcFace – a face recognition network trained on faces -- makes little sense to me. For example, why would it output anything meaningful on sketches?
2. Missing citation to StyleSpace [Wu et al.] who introduced the $\mathcal{S}$ used in this paper (line 120).
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: 1. Please clarify in what way the supposedly new setting suggested here was not already covered by HyperDomainNet, DynaGAN and Domain Expansion?
2. Why did you not include comparisons to these methods despite acknowledging that they are highly similar and relevant?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 2 fair
Limitations: One limitation is mentioned. However, several questions remain:
1. Can hybrids be produced by combining more than two domains?
2. Are the produced hybrids less faithful to each individual domain and to what extent?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The claimed contributions are incorrect, and the evaluation is severely lacking**
We apologize for any oversight and plan to enhance the quality of our paper through the following steps:
**1. Thorough Comparison and Differentiation from Prior Works**. This is demonstrated in the [General Response: Comprehensive Comparison and Discernment from Pertinent Works [1] [2] [3]](https://openreview.net/forum?id=jown9RvYn7&noteId=xNZlQlaQNI)
**2. Clarification of First-to-Consider Claim.** We acknowledge that our manuscript has inaccurately represented our major contributions. In Section 2.1 of the manuscript, we have indeed recognized that the three mentioned methods [1][2][3] achieve comparable domain memory capabilities to our approach. However, we find that both HyperDomainNet [1] and DynaGAN [2] lack the capacity to realize "Domain Association" due to their reliance on scale modulation parameters in the StyleGAN style space. Additionally, these methods did not thoroughly explore the potential applications of "Domain Association" in their respective papers. It is essential to clarify that the "domain interpolation" technique in DynaGAN [2] significantly differs from our concept of "Domain Association." While "domain interpolation" facilitates a seamless transition between two target domains through vector interpolation, it falls short in creating a hybrid domain that simultaneously embodies the properties of multiple fundamental domains, as seen in our "sketch"+"Sunglasses" domain example. We acknowledge the "domain composition" aspect discussed in [3] aligns with our proposed "Domain Association." Regrettably, our initial review of the references was incomplete, and we overlooked the significance of [3]'s contribution to the domain association. To rectify this, we will revise the description of our main contributions and provide a more comprehensive analysis of related works.
**3. Comprehensive Evaluation of Methods.** In light of our paper's focus on "few-shot GDA", we conducted qualitative and quantitative comparisons using established criteria, that is, the FID and Intra-LPIPS metrics on the Sketch, Babies, and Sunglasses datasets. However, we found that none of [1,2,3] evaluated their approaches using this popular setting. Consequently, we categorize them as one-shot GDA methods and refrain from comparing them in 10-shot experiments. In our main paper, we utilize CDC as a baseline for hybrid domain generation and storage, showcasing the superiority of our re-modulation technique across all five anticipated attributes of few-shot GDA. It's worth noting that the mentioned methods [1][2][3] lack consistent improvement compared to CDC-based approaches, particularly in terms of quality, diversity, and consistency. This raises the possibility that these methods may have traded core attributes to emphasize new ones, a conjecture supported by our additional rebuttal experiments (Fig. 6 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf)). Additionally, we conduct one-shot GDA experiments in Section A.4 of the Supplementary Materials, comparing our DoRM++ with [2] and demonstrating significant enhancements across the anticipated GDA attributes. We abstain from using [1] as a baseline due to its use of an unofficial StyleGAN2 implementation, differing from the official one. [3] is also excluded as it only supports text-driven domain adaptation in its GitHub release.
Furthermore, we offer a comprehensive evaluation of three methods: Our DoRM, [2], and [3] in this response. We didn't consider [1] as a baseline due to its implementation constraints. To address any confusion, we present detailed experiments on memory and domain association capabilities of these methods under 1-shot and 10-shot GDA (Fig 6 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf)). The outcomes underscore our DoRM's superiority.
**Q2. Measuring consistency between domains using ArcFace**
Indeed, accurately measuring the performance of generation models, particularly those with few-shot training data, is a notorious challenge. ArcFace is **commonly** used in domain adaptation and has been employed to measure the consistency between different domains in previous works [1], as well as to design identity loss for ensuring consistency between synthesized and source-domain faces [2]. To provide a more comprehensive evaluation, we have conducted a user study in one-shot GDA. As indicated in Table 1 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf), the users overwhelmingly favored our DoRM in all three aspects.
**Q3. Missing citation to StyleSpace**
The style space $\mathcal{S}$ used in our paper was indeed introduced by StyleSpace, and we will promptly include the citation of StyleSpace in the revision.
**Q4. Can hybrids be produced by combining more than two domains?**
As depicted in Fig. 5 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf), we have showcased an example of applying our method to create hybrid-domain images by activating the trained M&A modules of Baby, Sunglasses, and Sketch domains. This illustration demonstrates how easily our approach can be adapted to generate hybrids involving more than two domains, showcasing the versatility and potential of our approach.
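The underlying mechanism can be sketched in a few lines, under the assumption that each trained M&A module contributes a linearly combinable shift in the style space (function and variable names are illustrative, not our actual implementation):

```python
import numpy as np

def hybrid_style(s_source, domain_shifts, weights):
    """Combine linearly combinable style-space domain shifts (illustrative sketch).

    s_source:      (d,) style code from the frozen source generator
    domain_shifts: dict name -> (d,) shift produced by that domain's M&A module
    weights:       dict name -> scalar activation weight for that domain
    """
    s = s_source.copy()  # leave the source style code untouched
    for name, w in weights.items():
        s = s + w * domain_shifts[name]
    return s
```

Generating the three-way hybrid then simply means activating all three modules at once, e.g. `weights={'baby': w1, 'sunglasses': w2, 'sketch': w3}`, rather than interpolating between two endpoints.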
**Q5. Are the produced hybrids less faithful to each individual domain and to what extent?**
While our proposed method has showcased remarkable advancements in hybrid domain generation, it is true that some minimal compromise to the fidelity of each individual domain can occur. For instance, in the hybrid domain "elsa"+"Sunglasses" of Fig 6 in the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf), there is a subtle reduction in the features of the sunglasses domain. Regrettably, quantifying this distortion is complex. We will address it in future work.
---
Rebuttal Comment 1.1:
Title: Comment on rebuttal
Comment: Thanks for the rebuttal. I’m listing a few follow-up questions:
1. Can you please share the revised set of contributions, as you plan to present them in the paper?
2. Evaluation of hybrids - I agree that evaluating the fidelity of a hybrid domain with respect to each of the domains composing it is not trivial. Nevertheless, as this is one of the selling points of the paper, I would have expected the paper to include it. Domain Expansion includes such an experiment, for example. Also, until such an evaluation is performed, I suggest not to dismiss DynaGAN’s interpolation results. A midpoint in interpolation is clearly not a perfect hybrid but does include characteristics of both domains. It is not clear to me that the hybrids produced by this method would be superior.
3. Baselines -
* Can you please explain why HyperDomainNet using a third-party StyleGAN implementation is a reason to not compare with it? Especially given that this specific implementation was abundantly used by previous works.
* How were the Domain Expansion results in the attached Fig. 6 produced? I’m assuming it uses CLIP and text and thus is not exactly an apples-to-apples comparison. I think the best baseline would be applying CDC within the expansion framework. I acknowledge that this requires some additional coding, but from a practical standpoint, if the results of this baseline are better, I’m not sure what the contribution of this paper would be.
---
Reply to Comment 1.1.1:
Title: The response to follow-up questions_1
Comment: **1. Please share the revised set of contributions**
The key contributions of this work can be summarized as follows:
We present DoRM, an innovative generator architecture for few-shot generative domain adaptation, drawing inspiration from the learning mechanisms observed in human brains. DoRM stands out by not only producing high-quality, diverse, and cross-domain-consistent images, but also integrating memory and domain association capabilities that remain relatively unexplored in the field. Notably, our approach is one of the very few that encompasses all five desired properties of GDA. Moreover, our method showcases superior performance across multiple dimensions compared to existing similar works, highlighting its advanced contributions.
**2. Evaluation of hybrids**
We meticulously scrutinized the quantitative experiments conducted within the Domain Expansion framework, leading us to adopt a cosine similarity evaluation metric termed "domain similarity" ($Sim$) based on the CLIP image encoder ($E_I$). This metric is employed to provide a quantitative assessment of the fidelity exhibited by hybrid domains. In detail, for given generated images in the hybrid domain "sketch-baby" ($I_{SB}$), we extract image features from the provided images ($E_I(I_{SB})$) as well as from their corresponding target images ($E_I(I_{S})$ and $E_I(I_{B})$). Therefore, the "domain similarity" to the "sketch" and "baby" domains is defined as $Sim1=cos(\overline{E_I(I_{SB})}, \overline{E_I(I_S)})$ and $Sim2=cos(\overline{E_I(I_{SB})}, \overline{E_I(I_B)})$, respectively.
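The metric amounts to a cosine similarity between mean image embeddings. A minimal NumPy sketch, assuming the CLIP features $E_I(\cdot)$ have already been extracted (feature extraction itself is omitted; the function name is ours):

```python
import numpy as np

def domain_similarity(feats_hybrid, feats_target):
    """Domain similarity: cosine between mean CLIP image embeddings.

    feats_*: (num_images, d) arrays of precomputed CLIP image-encoder
    features for generated images. Features are averaged per domain
    first, then compared, matching Sim = cos(mean E_I(I_SB), mean E_I(I_S)).
    """
    a = feats_hybrid.mean(axis=0)
    b = feats_target.mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

For a hybrid domain we report the similarity to each constituent target domain separately, e.g. `domain_similarity(f_sb, f_s)` and `domain_similarity(f_sb, f_b)`, yielding the value pairs in the tables below.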
Subsequently, we compute the cosine similarity for each case. The quantification of results from both the one-shot and 10-shot experiments is meticulously presented in Table 1 and Table 2, respectively. These outcomes distinctly illustrate the remarkable performance of our proposed DoRM in both single-domain and hybrid-domain generation. It is imperative to highlight a notable observation amidst these findings: a certain outlier exists. Specifically, the images generated by DynaGAN and HyperDomainNet within the "elsa-sunglasses" domain exhibit a remarkably high domain similarity with the "sunglasses" domain, while conversely displaying significantly lower domain similarity with the "elsa" domain. This phenomenon can be explained by the generated images closely resembling the "FFHQ" domain, as depicted in Figure 6 of the PDF attached to the rebuttal. Consequently, the domain similarity to the "FFHQ-sunglasses" domain also emerges as notably high. Drawing upon this observation, we emphasize the necessity for a comprehensive evaluation of hybrid-domain generation that seamlessly integrates both qualitative and quantitative results.
**Table 1. Domain similarity on one-shot GDA. (high is better)**
| Method | elsa | baby |sunglasses|elsa-baby|elsa-sunglasses|
| :-----:| :----: | :----: | :----: | :----: | :----: |
| DynaGAN |0.9165|0.7866|0.8882|0.6196/0.7832|0.6245/0.8467|
| DynaGAN-interpolation |-|-|-|0.6201/0.7836|0.6330/0.8546|
| HyperDomainNet |0.8740|0.7739|0.8589|0.7007/0.7041|0.6554/0.7882|
| Domain expansion |0.9075|0.9614|0.9339|0.7022/0.7762|0.7671/0.6177|
| Ours |0.9309|0.9814|0.9370|0.7173/0.7842|0.7734/0.6377|
**Table 2. Domain similarity on 10-shot GDA. (high is better)**
| Method | sketches | babies |sunglasses|sketches-babies|sketches-sunglasses|
| :-----:| :----: | :----: | :----: | :----: | :----: |
| Domain expansion |0.8958|0.9546|0.9094|0.7480/0.7112|0.7720/0.6735|
| Ours |0.9492|0.9780|0.9136|0.8809/0.7271|0.8057/0.6973|
**3. Why HyperDomainNet using a third-party StyleGAN implementation is a reason to not compare with it?**
The existence of distinct latent spaces within different pre-trained source models hinders a straightforward correspondence of identical source-generated images across these models. While one approach involves obtaining the corresponding hidden space vector through GAN inversion, the absence of inversion code and the inability to generate aligned images with our locally trained model impedes direct application to HyperDomainNet's released code. Moreover, we acknowledge the resemblance shared between DynaGAN and HyperDomainNet, both incorporating a scale modulation parameter that is not linearly combinable. Opting for either of these methods as a comparative benchmark can effectively underscore the inherent challenges ingrained within their respective frameworks. Fortunately, after a substantial time investment, we have successfully completed the code and generated both quantitative results (as exhibited in Table 1) and qualitative outcomes for HyperDomainNet. Regrettably, in accordance with the NeurIPS 2023 rebuttal policy, we are constrained from sharing result visualizations via links. We assure you that we will furnish the visual results of HyperDomainNet in the camera-ready version. | Summary: This paper focuses on the concept of few-shot generative domain adaptation. Drawing inspiration from the human memory mechanism, the authors introduce a novel approach called DoRM (Domain Re-Modulation) to adapt the generator to a new domain. By incorporating new mapping networks and affine modules into the frozen source generator, they aim to enhance its adaptability. Additionally, the paper introduces a consistency loss based on CLIP image features to ensure the preservation of domain-sharing attributes during adaptation. Notably, the authors explore the intriguing concept of domain association in generative models, which involves merging the generative abilities of multiple domain-specific generators to enable generation in a new domain.
Through a combination of mapping networks and affine modules within a single generator, the proposed DoRM generator successfully achieves domain association and multi-domain generation concurrently. To validate the effectiveness of their approach, the authors conduct extensive qualitative and quantitative experiments.
Strengths: (1) This paper presents a novel task: domain association, which involves combining the generative abilities of different domain-specific generators to enable generation in a new domain. This topic represents an exciting avenue for further exploration in the field of generative models. Notably, Figure 4 demonstrates impressive synthesis quality in the hybrid domain, particularly in the baby-sketch domain.
(2) The proposed generator network structure, DoRM, is skillfully designed to facilitate domain association in a straightforward yet effective manner. By incorporating lightweight modules, the method achieves domain adaptation while preserving the generative abilities in the learned domains. This approach is both efficient in terms of storage and yields promising results.
(3) The paper is excellently written and presents its concepts in a clear and understandable manner, making it easy for readers to follow along with the proposed methodology.
(4) Through extensive qualitative and quantitative experiments, the paper demonstrates the superior performance of the proposed method compared to state-of-the-art techniques. Particularly impressive are the synthesis results achieved through domain association using the proposed method.
Weaknesses: Please view the Questions below.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: (1) It is worth noting that the proposed generator structure bears similarities to the one introduced in [1]. This overlap may somewhat diminish the novelty of the proposed method. To enhance the clarity of differentiation, it would be beneficial if the authors provide a more comprehensive explanation highlighting the distinct features and advancements of their approach in comparison to [1].
(2) In addition to the proposed loss regularization, the authors have included supplementary regularizations, referred to as DoRM++, based on [2], in the supplementary material's one-shot experiments. Although the synthesis results of DoRM++ outperform those of [2], it is important to note that the proposed DoRM approach already addresses overfitting concerns by freezing the discriminator's backbone. Consequently, the inclusion of these supplementary loss components may be considered redundant. Therefore, it would be beneficial to conduct further qualitative and quantitative comparisons between DORM and DORM++ to gain a comprehensive understanding.
(3) Section 4.3 would greatly benefit from further elaboration on the training details and specific datasets employed in the domain association experiments. Providing more specific information in these areas would enhance the reproducibility and understanding of the experimental setup for readers.
(4) In recent times, the diffusion model has made remarkable strides in image generation, particularly in the area of few-shot image generation. Therefore, there is a need for further discussion regarding the application of the diffusion model in the context of few-shot Generative Domain Adaptation and domain associations.
[1] HyperDomainNet: Universal Domain Adaptation for Training Generative Adversarial Networks. NeurIPS 2022
[2] Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks. NeurIPS 2022
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. It would be beneficial if the authors provide a more comprehensive explanation highlighting the distinct features and advancements of their approach in comparison to [1].**
Thank you for your valuable feedback regarding the discussion of related works. We fully understand the importance of a comprehensive comparison and analysis of existing methods, especially those that share similarities with our proposed approach. A thorough discussion of related works is provided in the [General Response: Comprehensive Comparison and Discernment from Pertinent Works [1] [2] [3]](https://openreview.net/forum?id=jown9RvYn7&noteId=xNZlQlaQNI)
**Q2. The paper should include experiments of DoRM++ in 10-shot GDA and DoRM in one-shot GDA, as this ablation experiment would help to demonstrate the advancement of the mentioned losses and the impact of data size on GDA tasks.**
We have diligently carried out additional experiments, as demonstrated in the right part of Figure 4 in the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf), to thoroughly explore the performance of DoRM++ in a 10-shot GDA scenario and that of DoRM in a one-shot GDA context. The results of these experiments have been thoughtfully analyzed and presented in our revised manuscript. From our findings, it is evident that both DoRM++ and DoRM exhibit strong performance in both few-shot and one-shot GDA scenarios. Notably, DoRM++ showcases enhanced cross-domain consistency compared to DoRM in the context of one-shot GDA.
**Q3. Section 4.3 would greatly benefit from further elaboration on the training details and specific datasets employed in the domain association experiments.**
Within the framework of our DoRM, a key element lies in the utilization of distinct re-modulation weights, as elaborated in Section 3.1, which are specifically tailored to individual target domains. To illustrate, let us consider the source domain as FFHQ. When steering towards relatively proximate target domains such as FFHQ-Baby or FFHQ-Sunglasses, which bear a closer resemblance to the source domain, we strategically set the re-modulation weight within the range of 0.004 to 0.05. This meticulous calibration has proven instrumental in attaining better synthesis outcomes for these domains. Conversely, when embarking upon more disparate domains such as the sketch domain or other artistic domains like the works of Amedeo and Monet, which exhibit substantial gaps when compared to the source domain, we judiciously adjust the re-modulation weight to fall within the span of 0.05 to 0.2. This discerning adjustment has significantly contributed to the achievement of improved synthesis results for domains characterized by pronounced dissimilarity.
**Q4. There is a need for further discussion regarding the application of the diffusion model in the context of few-shot Generative Domain Adaptation and domain associations.**
We sincerely appreciate your insightful review of our work. Your observation regarding the advancements of the diffusion model in image generation, particularly in the domain of few-shot image generation, is highly valuable. We agree with your point on the significance of discussing the application of the diffusion model in the context of few-shot Generative Domain Adaptation (GDA) and domain associations. Indeed, the diffusion model has shown promising capabilities in acquiring the concept (style or content) of the target domain through few-shot images. Remarkable examples like DreamBooth or Textual Inversion demonstrate how the diffusion model can discern the identity or style of the target domain from just one image, achieving impressive few-shot image generation. However, we recognize that the issue of cross-domain consistency remains a challenge in this context. Addressing this problem effectively holds great potential for the diffusion model in few-shot GDA. Moreover, the disentanglement of the text space in the diffusion model presents significant opportunities for domain association. Despite its potential, we agree with your observation that this aspect has not been thoroughly explored by the community. Further investigation into utilizing the diffusion model's disentangled text space for domain association can open new avenues and contribute to advancing the field.
We genuinely appreciate your valuable insights, which have shed light on important aspects for future research. Your feedback has inspired us to delve deeper into the potential applications and capabilities of the diffusion model in few-shot GDA and domain associations.
[1] HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks. NIPS 2022
[2] DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains. SIGGRAPH Asia 2022
[3] Domain Expansion of Image Generators. CVPR 2023
---
Rebuttal Comment 1.1:
Comment: Thank the author for their elaborated rebuttal. All my concerns are clarified. I would like to keep my rating positive.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 18N9
Comment: Thank you very much for your valuable comments. Your insights have significantly contributed to the improvement of our manuscript's quality. Best regards.
---
Rebuttal 2:
Comment: Dear Reviewer 18N9,
The author-reviewer discussion is closed on Aug 21st 1pm EDT, could you please read the rebuttal and give your final rating? Thanks so much!
Best,
AC | Summary: This paper proposes two advanced criteria for few-shot Generative Domain Adaptation (GDA) inspired by the way human brains acquire knowledge in new domains: memory and domain association.
To fully realize the potential of few-shot GDA, an innovative generator structure called Domain Re-Modulation (DoRM) is introduced.
This structure freezes the source generator and incorporates new lightweight mapping and affine modules (M&A modules) to capture the attributes of the target domain during GDA, resulting in a linearly combinable domain shift in the style space.
By incorporating multiple M&A modules, the generator gains the ability to perform high-fidelity multi-domain and hybrid-domain generation.
Strengths: 1. This paper presents a novel exploration into the use of memory and domain association in few-shot GDA, which has not been previously explored. These advanced properties greatly reduce memory usage and simplify deployment, while also producing impressive results.
2. The proposed DoRM structure and similarity-based structure loss are innovative and advanced.
3. The figures in this paper effectively demonstrate the major contributions and the overall organization is clear and easy to follow.
4. The experiments conducted in this paper are robust and include both one-shot and 10-shot settings. Extensive testing demonstrates that the proposed method outperforms the previous state-of-the-art method in all five properties: quality, synthesis diversity, cross-domain consistency, memory, and domain association.
Weaknesses:
There are a few areas where this paper could be improved.
1. It would benefit from including some recent studies on one-shot image generation [1].
2. The paper should include experiments of DoRM++ in 10-shot GDA and DoRM in one-shot GDA, as this ablation experiment would help to demonstrate the advancement of the mentioned losses and the impact of data size on GDA tasks.
3. The paper lacks a future outlook, and it would be helpful to explore more possible improvement directions, especially as the effect of domain association on some datasets is not always satisfactory.
[1] StyO: Stylize Your Face in Only One-Shot
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. It would benefit from including some recent studies on one-shot image generation [1].**
Thank you for providing valuable feedback on our work. We sincerely appreciate your suggestion to include recent studies on one-shot image generation. Staying up to date with the latest research in the field is crucial to ensuring the comprehensiveness of our work. We have thoroughly reviewed the referenced paper, and it is highly relevant to our research topic. The study introduces the diffusion model into one-shot GDA, offering valuable insights that can enrich our understanding of the domain. In light of this, we are committed to revising our paper to incorporate the relevant information and to acknowledge the contributions made by the diffusion model in the context of one-shot GDA.
**Q2. The paper should include experiments of DoRM++ in 10-shot GDA and DoRM in one-shot GDA, as this ablation experiment would help to demonstrate the advancement of the mentioned losses and the impact of data size on GDA tasks.**
Thank you for your insightful suggestion regarding the inclusion of ablation experiments involving DoRM++ in 10-shot GDA and DoRM in one-shot GDA. We truly value your recognition of the potential insights that these experiments could offer in terms of assessing the efficacy of our proposed losses and comprehending the influence of data size on GDA tasks. In direct response to your valuable feedback, we have diligently carried out additional experiments, as demonstrated in the right part of Figure 4 in the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf), to thoroughly explore the performance of DoRM++ in a 10-shot GDA scenario and that of DoRM in a one-shot GDA context. The results of these experiments have been thoughtfully analyzed and presented in our revised manuscript. From our findings, it is evident that both DoRM++ and DoRM exhibit strong performance in both few-shot and one-shot GDA scenarios. Notably, DoRM++ showcases enhanced cross-domain consistency compared to DoRM in the context of one-shot GDA. We greatly appreciate your guidance in enriching the empirical validation of our approach, and we are confident that these additional experiments further enhance the comprehensiveness of our research.
**Q3. The paper lacks a future outlook, and it would be helpful to explore more possible improvement directions, especially as the effect of domain association on some datasets is not always satisfactory.**
Thank you for your valuable feedback on our paper. We appreciate your suggestion to include a future outlook and explore potential improvement directions for our research. We also acknowledge the importance of addressing cases where the effect of domain association on certain datasets might not be entirely satisfactory. In response to your insightful comment, we plan to revise the paper to incorporate a dedicated section that outlines possible avenues for future research and improvements. This section will explore the limitations and challenges we encountered during the domain association process and propose potential solutions and directions for further investigation.
Some of the areas we intend to focus on include:
1. Advanced Domain Adaptation Techniques: We will investigate state-of-the-art domain adaptation methods and explore how integrating these techniques into our approach might enhance the performance of domain association, particularly on datasets where the effect is less satisfactory.
2. Data Augmentation Strategies: We will explore the effectiveness of different data augmentation approaches to improve the robustness of our model against variations in domain-specific characteristics.
3. 3D Few-Shot GDA: We will analyze the application of the proposed method to 3D few-shot GDA and extend DoRM accordingly.
4. The current manuscript realizes domain association by simply combining the M&A modules of different target domains and activating them at the same time. To further improve domain association, a better way to blend the target domains is to not only combine the trained target M&A modules but also employ a new M&A module together with an additional consistency loss.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: Thank you for the detailed rebuttal and clarifications provided in response to my initial review. Your explanations have significantly addressed my concerns and have deepened my understanding of the paper's contributions. I appreciate your thoroughness and look forward to seeing the refined version of your work.
---
Reply to Comment 1.1.1:
Title: Rebuttal by Authors
Comment: Thank you for your thoughtful review and valuable feedback on our paper. We are pleased to hear that our detailed rebuttal and clarifications have effectively addressed your concerns and contributed to a deeper understanding of the contributions outlined in our work. Your recognition of our thoroughness is greatly appreciated. Once again, we would like to extend our gratitude for your time and efforts in reviewing our paper. Your input is immensely valuable, and we are looking forward to sharing the refined version of our work with you. | Summary: The paper introduces a novel approach for domain adaptation of StyleGAN2 called Domain Re-Modulation, which is a few-shot technique. The authors argue that this method draws inspiration from the workings of the human brain. To achieve the desired domain shift, the paper utilizes the stylespace of StyleGAN along with adversarial and consistency losses. Additionally, a structural similarity loss is incorporated, which is based on the auto-correlation map extracted from the CLIP encoder. The effectiveness of the proposed method is evaluated through both quantitative and qualitative analyses. Furthermore, the paper includes comparisons with other state-of-the-art StyleGAN-based methods, highlighting its advancements. The authors also conduct an ablation study to analyze the individual components employed in their approach.
Strengths: * The paper introduces the concept of hybrid domain generation, which involves combining the affine layers of StyleGAN to generate results from multiple domains using a single generator. The authors demonstrate that this approach yields consistent and generalizable outcomes across multiple domains.
* Notably, the method proves effective even in scenarios with limited training data, such as 1-shot training. By leveraging the StyleGAN2 backbone, the proposed technique achieves reliable and coherent generation outcomes.
* In terms of evaluation, the paper compares its approach with various StyleGAN-based methods. It provides both qualitative and quantitative results to assess the performance of the proposed method. Furthermore, the paper includes comparisons with related methods to highlight the advantages and advancements of the proposed domain generation approach.
Weaknesses: 1. The paper lacks substantial novelty, as other works such as StyleCLIP and 3DAvatarGAN already demonstrate the capability of performing image editing and few-shot domain adaptation using the "s" space of StyleGAN. Although the paper incorporates a CLIP-based loss, it fails to provide a comparative analysis with papers in the same category, such as StyleGAN-NADA, Mind the Gap, and HyperStyle. The absence of such comparisons, and of explanations of the differences between the methods, makes it challenging to determine the uniqueness of the paper. To assess how the current method surpasses these works, it is essential to compare the individual components and techniques utilized in each approach; such a comparison would highlight the specific strengths of the proposed method and clarify its novelty. How is the current method better than these works? Which components of the method outperform these methods?
2. Why did the authors use 2D generators, especially when 3D StyleGAN-based generators trained on the same data are available? Besides the concerns above, it would be interesting to see how the method would perform in the 3D-GAN domain. What are some additional challenges in that domain?
3. How is the editability of the generator after domain adaptation? Since no edits are performed, it is difficult to assess whether the latent-space properties of the generator are preserved or whether it simply overfits the given styles.
There are other datasets, such as AFHQ Cats, Dogs, and Cars, that are used to test such few-shot domain adaptation tasks in the StyleGAN domain. The method seems specific to the face domain. What about results on these datasets? How generalizable is the method to them?
4. The artistic domains are quite subjective. It is not fair to rely only on quantitative metrics for such tasks. I would suggest conducting a user study. It would be better to embed real images in the generators and ask users about the similarity, consistency, and identity preservation of the target images.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Refer to Weakness section
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors discuss the ethics and limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The paper lacks substantial novelty**
We are committed to conducting a detailed and comprehensive comparative analysis that elucidates the specific advantages of our current method over existing works.
1. StyleCLIP only demonstrates the capability of performing image editing using the "s" space of StyleGAN, which is obvious and has been explored many times by previous approaches. 3DAvatarGAN is the first study of domain adaptation in 3D-GANs; however, it adopts a pre-trained 2D target-domain generator to achieve domain adaptation for 3D-GANs. Since that pre-trained 2D generator can already generate various target-domain images, 3DAvatarGAN only demonstrates that the "s" space of StyleGAN can support domain adaptation, but not few-shot domain adaptation. We provide a novel insight into the "s" space of StyleGAN and demonstrate that only one image is enough for domain adaptation through this space.
2. Compared with StyleGAN-NADA and Mind the Gap, [1] and [3] are more recent and more advanced methods. These methods [1][3] also use CLIP-based losses, and sufficient experiments have shown that they are superior to the mentioned baselines. Therefore, our manuscript compares the proposed method only with the advanced baselines [1][3] to demonstrate its advancement. Furthermore, we systematically analyze and compare the individual components and techniques utilized in our DoRM and DiFa [3] to showcase how our method surpasses these works. As stated in the Introduction of the manuscript, GDA requires three fundamental properties. To achieve them, DiFa [3] proposes two CLIP-based losses: a global loss ($L_\text{global}$) and a local loss ($L_\text{local}$), for realizing large diversity/cross-domain consistency and high quality, respectively. In contrast, we adopt a Similarity-based Structure Loss ($L_\text{ss}$) and an adversarial loss ($L_\text{adv}$) for the same purposes. As shown in Fig. 2 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf), the $L_\text{local}$ in DiFa mainly focuses on textural features and fails to capture the complete features (e.g., the white background in sketches) in generative domain adaptation. In our DoRM, we employ an adversarial loss that can fully capture the features of the target images.
**Q2. Why did the authors use 2D generators?**
Our decision to use 2D generators was primarily influenced by the maturity of the 2D few-shot GDA field, which has gained significant attention since 2021. While we acknowledge the potential benefits and advancements that 3D StyleGAN-based generators can offer, we opted to validate our approach in a more established and widely studied setting. Furthermore, we conducted initial studies using the popular 3D-aware image generation method EG3D for one-shot GDA, with FFHQ as the source domain and Sketch as the target domain. The results, shown in Fig. 3 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf), reveal that directly migrating the proposed method to EG3D for 3D one-shot domain adaptation poses challenges and is not straightforward. Some potential challenges in the 3D-GAN domain include overfitting, volumetric data representation, spatial artifacts, and computational complexity.
**Q3. How is the editability of the generator after domain adaptation? The method seems specific to the face domain. How generalizable is the method to other domains?**
To address this concern, we have planned additional experiments and evaluations to investigate the editability and latent interpolation of the generator after domain adaptation. Regarding editability, we have performed editing on a real image adapted into a new target domain, using StyleCLIP to discover editing directions in the source domain. The results, illustrated in the left part of Fig. 4 in the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf), indicate that the adapted generator maintains latent-based editing capabilities similar to the original generator, demonstrating that editability is preserved. For latent interpolation, we have presented various results in Sec. A.2 of the Supplementary Materials. In conclusion, our experiments confirm that the latent-space properties of the generator are well preserved, and our method goes beyond mere overfitting of the given styles.
Furthermore, we emphasize that the proposed DoRM approach is not inherently limited to the face domain. The method can be readily extended to other datasets: we apply our DoRM to the LSUN-Church dataset and adapt the pre-trained GAN to generate haunted-house images in Sec. A.5 of the Supplementary Materials. The results show that our method maintains cross-domain consistency in non-face domains as well.
**Q4. It would be better to conduct a user study.**
We have planned to conduct a user study in one-shot GDA to enhance our evaluation process. By embedding real images in the generators and collecting feedback from users, we aim to gain qualitative insights that complement the quantitative metrics, offering a well-rounded evaluation of our system's performance. As shown in Table 1 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf), preliminary results indicate that users strongly favor our DoRM in all three aspects, reflecting the effectiveness of our approach compared to the alternative methods.
[1] Generalized One-shot domain adaption of generative adversarial networks. NIPS 2022
[2] Dynagan: Dynamic few-shot adaptation of gans to multiple domains. SIGGRAPH Asia 2022
[3] Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks. NIPS 2022
---
Rebuttal 2:
Title: Please read the authors' response and give your final rating
Comment: Dear Reviewer cgji,
The author-reviewer discussion is closed on Aug 21st 1pm EDT, could you please read the rebuttal and give your final rating? Thanks so much!
Best,
AC | Rebuttal 1:
Rebuttal: We extend our heartfelt gratitude to all reviewers for their dedicated efforts and invaluable suggestions. We have meticulously addressed the specific concerns raised by each reviewer. For a more detailed breakdown of our responses, including supporting tables and figures, please refer to the attached [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf).
Furthermore, we are committed to enhancing the coherence and impact of our manuscript by incorporating these supplementary materials into the revised version. The reviewers' insightful recommendations have significantly contributed to refining the clarity and rigor of our research. Notably, we have recognized a recurring interest among the reviewers regarding the contextualization of relevant work, prompting us to provide a General Response. Moreover, we also provide systematic and comprehensive comparisons with related work [2][3] in Figure 6 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf). The results illustrate that our proposed DoRM is consistently superior to the previous studies.
**General Response: Comprehensive Comparison and Discernment from Pertinent Works [1] [2] [3].**
We are poised to conduct a comprehensive and systematic analysis of pertinent prior research, underscoring the distinguishing facets that set them apart from our own endeavor:
HyperDomainNet [1]: HyperDomainNet employs a sole modulation parameter, namely a scale ($\delta$), to manipulate convolutional layer weights, impacting the StyleGAN "s" space. This configuration maintains a constant scale across all images within a target domain. Specifically, HyperDomainNet's architecture is formalized as $w \cdot s_i \cdot \delta$, with $w$ and $s_i$ denoting the source StyleGAN2's convolutional weight and style code, respectively. We recognize that this overarching scale parameter may inadvertently constrain the generator's learning capacity. Notably, in scenarios where considerable domain gaps exist between the source and target domains, HyperDomainNet's efficacy could diminish. Moreover, because the scale modulation parameter cannot be blended linearly, it might hinder robust domain association.
DynaGAN [2]: Among methods closely aligned with our DoRM, DynaGAN introduces two modulation parameters, a shift ($\Delta s$) and a scale ($\delta$), applied to convolutional weights and influencing StyleGAN's "s" space. DynaGAN's structural configuration can be expressed as $w \cdot (s_i + \Delta s) \cdot \delta$. Analogous to HyperDomainNet, the non-linear blending of the scale modulation parameter may impede its capacity to establish robust domain associations (as exemplified in Figure 6 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf)). In contrast, our DoRM exclusively adopts a sample-specific domain shift denoted as $\Delta s_i$, which is mathematically formulated as $w \cdot (s_i + \Delta s_i)$. This innovative approach significantly amplifies the generator's aptitude for learning, facilitating seamless adaptation to a diverse array of target domains, even in the presence of substantial domain disparities (as demonstrated in Figure 6 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf)). Impressively, this sample-specific domain shift effectively caters to both few-shot and one-shot Generative Domain Adaptation (GDA), obviating the need for an additional domain-scale parameter. Our streamlined methodology not only fosters robust domain association but also substantiates its efficacy through empirical validation.
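The three modulation schemes above can be contrasted in a short illustrative sketch (NumPy only; the shapes, seed, and variable names are our own and do not correspond to any released implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
out_ch, in_ch = 512, 512                  # channels of one StyleGAN2 conv layer
w = rng.normal(size=(out_ch, in_ch))      # frozen source-generator weight
s_i = rng.normal(size=in_ch)              # per-sample style code in the "s" space

# HyperDomainNet: one scale δ shared by every image of the target domain
delta = rng.normal(size=in_ch)
w_hdn = w * (s_i * delta)                 # w · s_i · δ

# DynaGAN: a domain-wide shift Δs plus a domain-wide scale δ
delta_s = rng.normal(size=in_ch)
w_dyna = w * ((s_i + delta_s) * delta)    # w · (s_i + Δs) · δ

# DoRM: a sample-specific shift Δs_i (produced by the M&A modules in practice),
# with no extra domain-wide scale parameter
delta_s_i = rng.normal(size=in_ch)
w_dorm = w * (s_i + delta_s_i)            # w · (s_i + Δs_i)
```

Note how only the DoRM variant carries a purely additive, per-sample shift with no domain-wide scale, which is what makes the shifts of several target domains linearly combinable.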
Domain Expansion [3]: Drawing inspiration from SeFA, Domain Expansion constructs a semantic and orthogonal basis V from right singular vectors obtained through SVD of the initial generator layer, impacting the latent space Z. Focusing on a subset of basis vectors, Domain Expansion aptly models source generator variability, repurposing these unexplored subsets to encapsulate desired behaviors for diverse target domains. Unlike alternative approaches, including HyperDomainNet, DynaGAN, and our DoRM, which expand the style space of the source domain to encompass target domains, Domain Expansion instead narrows the style space of the source domain. While this technique repurposes latent space, it inadvertently curtails the source domain's generative capacity, amplifying the intricacies and temporal requisites of domain adaptation. As depicted in the Figure 6 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf), Domain Expansion encounters challenges when faced with substantial domain gaps between source and target domains. Evidently, its performance is compromised, as evidenced in instances such as adapting to the 10-shot generation context of the Sketch dataset and the one-shot generation context of the "elsa" dataset.
[1] HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks. NIPS 2022
[2] DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains. SIGGRAPH Asia 2022
[3] Domain Expansion of Image Generators. CVPR 2023
Pdf: /pdf/d07e86915bdb54f36e40a3f1194d2bdb97b9fdf0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a novel approach for few-shot generation using a lightweight GAN architecture and a new loss function. The method is capable of handling multi-domain and hybrid domain tasks with a single model. The experiments demonstrate the superior performance of the proposed method in terms of both qualitative and quantitative results.
Strengths: 1. The paper addresses a novel task and presents a unique approach to handle it.
2. The experiments are comprehensive and well-organized, providing strong evidence for the effectiveness of the proposed method.
3. The proposed method outperforms previous methods in terms of both qualitative and quantitative results.
4. The exploration of the generative ability to merge two or three domains for few-shot generation is interesting and innovative.
Weaknesses: 1. The paper should provide more analysis or improvement on the potential issue of unrealistic results due to domain association.
2. The related works should be discussed more thoroughly, especially with respect to HyperDomainNet [1] and DynaGAN [2], which share similar ideas with the proposed method.
3. The proposed method is designed specifically for StyleGAN2 architecture, limiting its generalization to other architectures.
4. It would be beneficial to expand the list of baselines in Table 1 to include additional approaches, such as MineGAN [3] and EWC [4].
5. This paper utilizes adversarial training while freezing the discriminator's backbone. However, in my understanding, the discriminator may still overfit the training data due to the adversarial loss. This issue has also been observed in comparable works, like CDC, where the adversarial loss weight is only 1 while the consistency loss weight is 1000, yet the overfitting problem persists as training progresses. How does the proposed approach handle this?
[1] Hyperdomainnet: Universal domain adaptation for generative adversarial networks. NIPS 2022
[2] Dynagan: Dynamic few-shot adaptation of gans to multiple domains. SIGGRAPH Asia 2022
[3] Minegan: effective knowledge transfer from gans to target domains with few images. CVPR 2020
[4] Few-shot image generation with elastic weight consolidation. NIPS 2020
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. The paper should provide more analysis or improvement on the potential issue of unrealistic results due to domain association.
2. The related works should be discussed more thoroughly, especially with respect to HyperDomainNet [1] and DynaGAN [2], which share similar ideas with the proposed method.
3. The generalization of the proposed method should be discussed.
4. It would be beneficial to expand the list of baselines in Table 1 to include additional approaches, such as MineGAN [3] and EWC [4].
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have addressed some limitations of this paper. One more limitation should be addressed: the proposed method is designed specifically for the StyleGAN2 architecture, limiting its generalization to other architectures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The paper should provide more analysis or improvement on the potential issue of unrealistic results due to domain association.**
As highlighted, our paper represents the first systematic attempt at domain association in few-shot Generative Domain Adaptation (GDA). We recognize the significance of refining our method to produce more realistic results. In our primary paper, we achieve hybrid-domain generation by combining the M&A (mapping and affine) modules of different target domains and activating them simultaneously. This approach is simple and easy to implement. However, we acknowledge that further enhancements can be made to improve the performance of domain association. To address this, we propose an additional strategy where we not only combine the trained target M&A modules but also introduce a new M&A module to better blend the target domains. Specifically, we utilize a directional loss based on Contrastive Language-Image Pretraining (CLIP) to train these new M&A modules, as depicted in Fig. 1 of the [PDF](https://openreview.net/attachment?id=xNZlQlaQNI&name=pdf). By incorporating this new M&A module and the directional loss, we aim to enhance the realism and fidelity of the associated domains, thus mitigating the issue of unrealistic results.
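Because each target domain contributes an additive shift in the style space, the blending step itself reduces to a weighted sum of per-domain shifts. A minimal sketch (the shift vectors and equal blend weights below are hypothetical, chosen only for illustration):

```python
import numpy as np

def hybrid_shift(shifts, alphas):
    """Linearly combine per-domain style-space shifts Δs^(k) with weights α_k."""
    assert len(shifts) == len(alphas)
    return sum(a * s for a, s in zip(alphas, shifts))

# Hypothetical 3-dimensional shifts for two target domains
sketch = np.array([0.2, -0.1, 0.4])
baby = np.array([-0.3, 0.5, 0.1])

# Equal-weight blend of the two domains for hybrid-domain generation
blend = hybrid_shift([sketch, baby], [0.5, 0.5])
```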
**Q2. The related works should be discussed more thoroughly.**
Thank you for your valuable feedback regarding the discussion of related works. We fully understand the importance of a comprehensive comparison and analysis of existing methods, especially those that share similarities with our proposed approach. A thorough discussion of related works is provided in the [General Response: Comprehensive Comparison and Discernment from Pertinent Works [1] [2] [3]](https://openreview.net/forum?id=jown9RvYn7&noteId=xNZlQlaQNI).
**Q3. The proposed method is designed specifically for StyleGAN2 architecture, limiting its generalization to other architectures.**
Thank you for your valuable suggestion. StyleGAN2 has gained widespread recognition as one of the most popular architectures in few-shot image generation, serving as the foundation for numerous previous methods. Hence, we have adopted StyleGAN2 in our approach, following the common practice in the field. However, it is essential to emphasize that our method is not confined to StyleGAN2 alone; rather, it exhibits adaptability to any layer-wise generator. This flexibility enables our approach to be applied to various state-of-the-art GAN architectures that employ layer-wise structures, as seen in works such as [4][5][6][7][8]. The prevalence of layer-wise designs in current GAN research makes our method highly versatile and opens up opportunities for its potential application and integration into different generative models. By highlighting this adaptability to diverse layer-wise generators, we aim to underscore the broader scope and applicability of our approach, which can contribute to various GAN architectures and research domains.
**Q4. It would be beneficial to expand the list of baselines in Table 1 to include additional approaches.**
Thanks for your constructive suggestion. We have added the results of MineGAN and EWC in Table 1 as follows:
| Methods | Babies FID | Babies I-LPIPS | Babies ID | Sunglasses FID | Sunglasses I-LPIPS | Sunglasses ID | Sketches FID | Sketches I-LPIPS | Sketches ID |
| :-----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| MineGAN | 98.23 | 0.514 | 0.132 | 68.91 | 0.42 | 0.171 | 64.34 | 0.40 | 0.092 |
| EWC | 87.41 | 0.523 | 0.145 | 59.73 | 0.431 | 0.156 | 71.25 | 0.42 | 0.103 |
| DoRM | 30.31 | 0.623 | 0.445 | 17.31 | 0.644 | 0.389 | 40.05 | 0.502 | 0.365 |
**Q5. How does the proposed approach avoid overfitting?**
Thank you for your valuable review of our work. We acknowledge the crucial issue of discriminator overfitting during adversarial training, which can diminish the synthesis diversity of the generator, leading to outputs closely resembling training data. In response, our DoRM introduces the similarity-based structure loss $L_{ss}$ to preserve the generator's diversity in synthesis. An important distinction is that our DoRM has a lighter parameter load compared to CDC, which involves training the entire generator. Additionally, we integrate data augmentation and early stopping techniques into our DoRM to further enhance its performance.
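To make the idea of a similarity-based structure loss concrete, here is a minimal sketch of a CDC-style pairwise-similarity KL term that preserves the source generator's diversity. This is our illustrative reconstruction under assumed definitions (features as row vectors, dot-product similarity), not the paper's exact $L_{ss}$:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def structure_loss(feats_src, feats_tgt):
    """Similarity-based structure loss (CDC-style sketch): keep each sample's
    pairwise-similarity distribution in the adapted domain close to the
    corresponding distribution in the source domain (KL divergence)."""
    n = len(feats_src)
    loss = 0.0
    for i in range(n):
        # similarity of sample i to all other samples, per domain
        sims_s = np.array([feats_src[i] @ feats_src[j] for j in range(n) if j != i])
        sims_t = np.array([feats_tgt[i] @ feats_tgt[j] for j in range(n) if j != i])
        p, q = softmax(sims_s), softmax(sims_t)
        loss += np.sum(p * (np.log(p) - np.log(q)))  # KL(p || q)
    return loss / n
```

When the adapted generator preserves the relative similarity structure of the source, this loss is near zero; it grows as the adapted samples collapse toward one another.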
[1] HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks. NeurIPS 2022.
[2] DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains. SIGGRAPH Asia 2022.
[3] Domain Expansion of Image Generators. CVPR 2023.
[4] Large Scale GAN Training for High Fidelity Natural Image Synthesis. ICLR 2019.
[5] A Style-Based Generator Architecture for Generative Adversarial Networks. CVPR 2019.
[6] Analyzing and Improving the Image Quality of StyleGAN. CVPR 2020.
[7] Alias-Free Generative Adversarial Networks. NeurIPS 2021.
[8] StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets. SIGGRAPH 2022.
---
Rebuttal 2:
Title: Please read the authors' response and give your final rating
Comment: Dear Reviewer Jqfj,
The authors provide a response including tables and analyses. The author-reviewer discussion is closed on Aug 21st 1pm EDT, could you please read the rebuttal and give your final rating? Thanks so much!
Best,
AC
---
Rebuttal Comment 2.1:
Comment: I have carefully read the authors’ rebuttal and the other reviewers’ comments. I think the authors have satisfactorily addressed all my concerns and improved the quality of their paper. Therefore, I maintain my positive score for this paper.
---
Reply to Comment 2.1.1:
Title: Response to Reviewer Jqfj
Comment: Thanks a lot. We appreciate your valuable comments, which have greatly helped us to improve the quality of our manuscript. Best regards. | null | null | null | null | null | null |
Implicit Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis | Accept (poster) | Summary: The paper presents a method for incorporating outlier detection into an end-to-end learnable framework. Essentially the paper shows how it is possible to differentiate efficiently through a per-instance Gaussian mixture model using either unrolling, implicit differentiation or Jacobian-free back-propagation as discussed in recent literature. The paper is well written and experiments combining vision and language for visual-question answering (VQA) are promising.
Strengths: The paper is well written and the proposed method for a differentiable Gaussian mixture model is sound. I am mostly positive about the paper but have a few questions below. I am willing to raise my score depending on the answers provided by the authors.
Weaknesses: As mentioned above, the paper is mostly well written. However, there are a few minor places where the paper could be improved. Some prior related work on differentiable optimization is missing (e.g., Gould et al., TPAMI 2022). These and cited prior related works already discuss the reduced memory costs during back-propagation via implicit differentiation, limiting the novelty of the paper (certainly Line 73). Some of the mathematical expressions can be tidied, e.g.,
a. Size of outer parentheses in Eqn (3)
b. Lower limit in denominator summation in Eqn (4) should be "u=1"
And some consistency in referencing tables and figures. E.g., Section 4.1 refers to figures in four different ways "Figure 3a", "Figure3a" (no space), "Fig 3b" (no period), and "Fig. 2" (period, but breaking space). Also tables are referenced with lowercase "table" and figures with uppercase "Figure". I recommend being consistent and using non-breaking spaces between label and reference number.
Also, wrapping tables in the text can result in confusing line breaks, e.g., Line 265.
The above are all minor presentation issues. Two more serious concerns are:
1. The results from implicit differentiation are only valid when the forward pass algorithm actually results in a fixed point, i.e., is run to convergence. Since the method presented in the paper only iterates for a pre-determined number of iterations, convergence cannot be guaranteed. What happens when the resulting GMM parameters have not converged? Can the authors comment on the validity of the gradients obtained?
2. Table 1 compares three different methods for back-propagation. I understand that the time and memory of these methods will vary but I don't understand why the number of parameters in the models should be different (i.e., 152.6M, 125.2M and 124.8M for Vanilla-EM, JB-EM and JFB-EM, respectively). Can the authors please comment. Also, does the time per epoch include execution of the forward pass? I would expect this to dominate the backward pass. Can the authors please provide forward pass timings separate from backward pass timings.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See questions raised in Weaknesses above. I am willing to increase my score depending on the answers to these questions, especially relating to results reported in Table 1.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations were adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We appreciate your constructive suggestion to report forward-pass and backward-pass execution times separately in Table 1. We will fix all the typos, including those in the math expressions, and keep the references to tables and figures consistent in the revised version. We address your concerns in the following responses:
>The results from implicit differentiation are only valid when the forward pass algorithm actually results in a fixed point, i.e., is run to convergence. Since the method presented in the paper only iterates for a pre-determined number of iterations, convergence cannot be guaranteed. What happens when the resulting GMM parameters have not converged? Can the authors comment on the validity of the gradients obtained?
**Concern 1:**
Please refer to general response 3, which we hope answers your question. As long as the optimization algorithm (JFB-EM) converges to cluster points $\tilde{\mu},\tilde{\sigma}$ such that the calculated gradients are positively correlated with the true gradients, we can decrease the loss function. Thus, it is not necessary for the EM iterations to solve the clustering problem exactly. In practice, we run only a few iterations to get reasonable $\mu$ and $\sigma$ and use the energy of the GEM score to filter OOD samples in the overall pipeline. The only use of $\mu^*$ and $\sigma^*$ is to calculate the OOD score $s$ in equation (4), so an approximate $s$ is sufficient for optimization purposes.
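To make the role of the approximate $\tilde{\mu},\tilde{\sigma}$ and the energy-style OOD score concrete, here is a minimal NumPy sketch of a diagonal-GMM EM loop with means initialized from random input features, plus a negative-log-likelihood score used as a simplified stand-in for the GEM score. All names and shapes are illustrative assumptions, not the paper's code:

```python
import numpy as np

def em_gmm(X, k, iters=5, seed=0):
    """Run a few EM iterations for a diagonal-covariance GMM on features X (n, d)."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), k, replace=False)]  # init means from k random features
    var = np.ones((k, X.shape[1]))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under diagonal Gaussians (constants cancel)
        logp = -0.5 * (((X[:, None, :] - mu[None]) ** 2 / var[None]).sum(-1)
                       + np.log(var).sum(-1)) + np.log(pi)
        logp -= logp.max(1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        # M-step: update weights, means, and variances
        nk = r.sum(0)
        pi = nk / len(X)
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ (X ** 2)) / nk[:, None] - mu ** 2 + 1e-6
    return pi, mu, var

def energy_score(x, pi, mu, var):
    """Negative log-likelihood of one feature under the fitted GMM: high = likely OOD."""
    logp = -0.5 * (((x[None] - mu) ** 2 / var).sum(-1) + np.log(var).sum(-1)) + np.log(pi)
    m = logp.max()
    return -(m + np.log(np.exp(logp - m).sum()))
```

In a pipeline of this shape, features with a score above a threshold would be filtered out as outliers; an inexact fit still ranks far-away outliers above in-distribution features.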
>Table 1 compares three different methods for back-propagation. I understand that the time and memory of these methods will vary but I don't understand why the number of parameters in the models should be different (i.e., 152.6M, 125.2M and 124.8M for Vanilla-EM, JB-EM and JFB-EM, respectively). Can the authors please comment. Also, does the time per epoch include execution of the forward pass? I would expect this to dominate the backward pass. Can the authors please provide forward pass timings separate from backward pass timings.
**Concern 2:**
(1) For Vanilla-EM we need to store the intermediate gradients and the $\mu$ and $\sigma$ of each iteration, and for JB-EM the full Jacobian matrix, during training, so we include these in the reported parameter counts.
(2) Yes, the time reported in Table 1 includes the execution time of the forward pass.
(3) We agree with the reviewer that forward pass dominates backward pass since we can reuse the weights available during forward pass.
(4) We update Table 1 with separate forward pass and backward pass timings in the rebuttal PDF and will update it in the revised version.
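The Jacobian-free backward pattern that keeps backward memory constant can be sketched as follows. This is an illustrative PyTorch snippet (a soft k-means-style mean update stands in for the full EM step, and all names are ours), not the paper's implementation:

```python
import torch

def em_update(x, mu):
    # One soft-assignment mean update (k-means/EM-style E and M step).
    w = torch.softmax(-torch.cdist(x, mu) ** 2, dim=1)  # (n, k) responsibilities
    return (w.t() @ x) / w.sum(0, keepdim=True).t()     # (k, d) updated means

def jfb_em_means(x, mu, iters=5):
    """Jacobian-free backprop: iterate without tracking gradients,
    then differentiate through the last update only (O(1) backward memory)."""
    with torch.no_grad():
        for _ in range(iters - 1):
            mu = em_update(x, mu)
    return em_update(x, mu)  # the single tracked step defines the backward pass
```

Because only the final update is recorded on the autograd tape, the backward pass touches a single iteration's activations regardless of `iters`, which is why it is dominated by the forward pass.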
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I accept that running EM for a few iterations may give a good approximation to the gradient (sometimes) and work well in practice. I think you have to be careful about claiming that all you need is positive correlation between the approximate and true gradients, i.e., $g_{\mu*, \sigma*}^T g_{\mu, \sigma} \geq 0$. First, this can't be known without actually computing the true stationary values of $\mu$ and $\sigma$. Second, the chain rule does not hold for approximated gradients, so just because there is positive correlation for the gradients of the loss with respect to $\mu$ and $\sigma$ does not mean that there will be positive correlation for the gradients of the loss with respect to upstream parameters. A discussion/warning of this should be included in the paper.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer NYTC,
Thank you for the further discussion. Here, we clarify what we meant by correlation to better support our submission. An upstream layer is any layer before a given layer -- in our case, the VK-OOD layer -- and is thus closer to the input. To simplify terminology, we consider a setup in which the language features $l$ in equation (3) are first processed by a ReLU-based "upstream" layer parametrized by $W_{\text{up}}$, followed by an energy calculation using the output means ${\mu}_k^*$ of the EM algorithm. We now look at the gradient computed using an approximate output of the EM algorithm.
Formally, we consider the loss function given by $$E(W_{\text{up}};\mu^*):=-\log (\exp (0.5 \cdot \| \text{ReLU}(W_{\text{up}} \cdot l) - \mu^* \|_2^2 ))$$
where we assumed the number of components to be one, i.e., $K=1$, for simplicity. Now, the gradient of the energy score (equation (2)) itself is given by the chain rule as follows:
$$\nabla_{W_{\text{up}}}E(W_{\text{up}};\mu^*) = \frac{\partial \left(-\log \left(\exp \left(0.5 \cdot \| \text{ReLU}(W_{\text{up}} \cdot l) - \mu^* \|_2^2 \right)\right)\right)}{\partial W_{\text{up}}} = -\left(\text{ReLU}(t_0) - \mu^*\right) \odot \text{ReLU}(\text{sign}(t_0)) \cdot l^T$$
where $t_0 := W_{\text{up}} \cdot l$, and $\odot$ denotes the Hadamard or elementwise product. Importantly, please note that $\nabla_{W_{\text{up}}}E(W_{\text{up}};\mu^*)$ is linear in $\mu^*$. By the definition of a *descent* direction, it is possible to reduce the loss by stepping along the approximate negative gradient computed with $\tilde{\mu}$ from finitely many iterations, as long as $\text{tr}\left(\nabla_{W_{\text{up}}}E(W_{\text{up}};\mu^*)^{\top} \nabla_{W_{\text{up}}}E(W_{\text{up}};\tilde{\mu}) \right)>0$, i.e., as long as the approximate gradient is positively correlated with the true one. In light of this linear relationship, we mentioned the correlation in our response. We agree with you that this condition is not known to us without actually computing $\mu^*$, and that this relationship heavily depends on the ReLU layers. We are happy to clarify this in our revision and to consider extensions in our future work. Kindly let us know if our responses have addressed your concerns satisfactorily, and do not hesitate to request further elaboration if needed. Thank you once again for your constructive feedback.
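The correlation between the true and approximate gradients can be checked numerically on a toy instance of the formula above. This is only an illustrative sketch with assumed shapes and values (entries of $W_{\text{up}}$ and $l$ are taken positive so the ReLU stays active and the gradient is non-zero), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.abs(rng.normal(size=(4, 3)))  # illustrative upstream weights (positive -> ReLU active)
l = np.abs(rng.normal(size=(3, 1)))  # illustrative language feature vector

def grad_E(W, l, mu):
    # Gradient of E(W; mu) = -log(exp(0.5 * ||ReLU(W l) - mu||^2)) w.r.t. W
    t0 = W @ l
    return -(np.maximum(t0, 0) - mu) * (t0 > 0) @ l.T

mu_star = rng.normal(size=(4, 1))                    # stand-in for the converged EM output
mu_tilde = mu_star + 0.01 * rng.normal(size=(4, 1))  # finite-iteration approximation
g_star = grad_E(W, l, mu_star)
g_tilde = grad_E(W, l, mu_tilde)
corr = np.trace(g_star.T @ g_tilde)  # > 0: approximate gradient positively correlated
```

Because the gradient is linear in $\mu$, a small approximation error $\tilde{\mu}-\mu^*$ only perturbs the gradient slightly, so the inner product stays positive in this toy case.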
We agree with you that this condition is not known to us without actually computing $\mu^*$, and this relationship heavily depend on ReLU layers. We are happy to clarify this in our revision, and consider extensions in our future work. Kindly inform us if our responses addressed your concerns satisfactorily, and do not hesitate to request further elaboration if needed. Thank you once again for your constructive feedback. | Summary: This paper introduces a implicit Differentiable Out-of-Distribution (OOD) detection layer. This layer addresses outlier detection by solving for fixed points of a differentiable function and using the last iterate of fixed point solver to backpropagate.
Strengths: This is a well-organized and written paper. For example, the language of the introduction section, starting from the statement of the advantage of JFB method to the problem of external knowledge-based multimodal methods, the storyline is smooth, natural, and clear.
The paper is self-contained. Most contributions claimed in the introduction section have the corresponding evidence in the experiment section. Ablation study, analysis, and limitations, all of them be included, organized, and discussed to support the points of this paper.
Finally, such a JFB idea is pretty interesting.
Weaknesses: 1. To my understanding, VK-OOD is a method that should not incur many additional parameters. Why, then, is there such a big gap in the number of parameters in Table 4 between the baselines and the proposed method (e.g., BLIP vs. VK-OOD (BLIP), 346M vs. 412M)? Is this a fair comparison?
2. This paper pursues a simple idea: making an iterative algorithm (e.g., k-means) end-to-end with networks, while using the JFB method to make it faster and more efficient. I don't know whether this is a real problem in that community. Does it really matter? As the results in Table 1 show, the improvement is basically minor.
This paper is definitely a good paper; I can recognize the professionalism of the authors. My concerns are mainly about the contribution. If the authors can offer convincing reasons, I would consider raising my score.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: refer to the weaknesses part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Has been discussed in the main paper, and I agree with that.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper. We are glad that you found our paper to be well-written and the proposed JFB-based outlier detection idea to be interesting. First, please check our general responses for clarification on the model parameters and performance. We will then address your concerns about the number of parameters and our contributions; please see the responses below:
**Number of parameters - W1:** In VK-OOD, we update $\mu$ and $\sigma$ during the training, which will bring additional parameters. Please refer to the clarification for the number of parameters in the general response 1 and check the updated Table 4 in the rebuttal PDF. As results shown in updated Table 4, the **scalar** VK-OOD also outperforms other baselines with **marginal** increase in the number of parameters, such as VK-OOD-s (BLIP) vs BLIP: 346.4M vs 346M.
**Model performance - W2:** Indeed, Table 1 shows a proof of principle w.r.t. the time and memory costs of using JFB-based OOD feature detection in multimodal pipelines. For the major improvements, please refer to Tables 2 and 4 for more details. The results indicate that VK-OOD achieves significant improvements (up to $\approx 5$-$10\%$ higher accuracy) compared to what the baselines achieve in the same settings.
**Our contributions:**
As noted by other reviewers, we tackle the complex and challenging problem of integrating explicit and implicit knowledge in end-to-end multimodal pipelines, aiming to enhance performance while reducing computational resources. Our proposed method is differentiable, has efficiency advantages, and can be applied to different datasets and to different multimodal backbone networks or upstream feature extractors. Furthermore, we provide comprehensive, well-conducted experiments, which demonstrate promising understanding and generalization performance improvements on multiple downstream tasks. In short, our main novelty and contribution is the application of the JFB method to outlier detection, with only slightly more parameters, in practical multimodal settings.
---
Rebuttal Comment 1.1:
Title: Feedback by Reviewer MwwQ
Comment: Thanks for the response, and I raised my score to borderline accept. | Summary: The paper presents an approach that combines the features from pre-trained deep networks and freely available semantic explicit knowledge. It proposes an implicit out-of-distribution (OOD) detection layer to address outlier detection and thus further improve understanding and generalization performance in large-scale vision and language settings. It offers comprehensive explanations of the proposed approach, along with experimental results and comparisons against other methods.
Strengths: 1. The proposed method is utilizing outlier detection to improve model training, instead of solely for outlier detection.
2. The proposed method is differentiable, efficient and can be applied to different datasets and multimodal backbones.
3. Comprehensive experiments.
Weaknesses: 1. The iterative method, though optimized for efficiency, still has concerns in training time.
2. Adding the proposed method sometimes causes degradation as shown in Table 4.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Table 4, how do you interpret that VK-OOD (BLIP) is worse than BLIP on COCO and Flickr30k (especially since yours has more parameters)?
2. How much more training resource/time were added when adding VK-OOD to other SOTA vision-language models?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The iterative method, though optimized for efficiency, still has concerns in training time.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. We will address your concerns/questions regarding the training time and model performances as follows:
**Q1:** Please refer to general responses 1 and 2, and check the updated Table 4 in the rebuttal PDF. During training, we augmented each caption with 5 external knowledge triplets. Thus, considering the ***noise*** level of the textual feature space, the recall drop shown in Table 4 is **marginal**.
**Q2:**
We pre-train on three datasets with a total of 1M images and 6.8M image-caption pairs, which is approximately **30\% less** than what baselines such as ViLT used in their training. Each caption is augmented with 5 external knowledge triplets. We trained VK-OOD-l (ViLT) on this training set for 50k steps on 8 NVIDIA 2080Ti GPUs, which took only around 2.5 days. Thus, the overall training time and resources are much less than those of ViLT and BLIP, yet our proposed model obtains significant improvements in all downstream tasks while training on fewer samples. Each sample in VK-OOD-l (ViLT) requires **only** $\approx$ 3 ms more than the baseline (ViLT) in the dense case. Our results thus seem promising!
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I have read the rebuttal. | Summary: This paper introduces a novel approach to integrating explicit knowledge graphs into deep networks for multimodal analysis. To filter noise brought by external knowledge, the authors propose an implicit differentiable Out-of-Distribution (OOD) detection layer with efficient backpropagation. This implicit layer comprises the Expectation Maximization (EM) steps of Gaussian Mixture Models (GMM). To efficiently differentiate through the OOD detection implicit layer, Jacobian-free backpropagation was applied in the final optimization step. The proposed OOD detection layer demonstrates state-of-the-art results with significantly fewer samples and less training time.
Strengths: - The experiment section is well-structured. Clarity of this section is a plus.
- The ablation study is carefully conducted, revealing the influence of hyperparameters and external knowledge resources and the effectiveness of efficient backpropagation.
- Despite having 30% incomplete data, the proposed implicit layer outperforms previous models in the OOD settings. This demonstrates the OOD generalization of implicit layers.
- Informative qualitative results demonstrate how the proposed model performs in the real VQA datasets.
Weaknesses: - Missing references. The current manuscript doesn't effectively integrate previous research into the discussion. The following references are highly relevant and should be acknowledged:
- Paper [1] is a landmark work in implicit models.
- Paper [2] proposed the one-step gradient for backpropagating through iterative algorithms, substantially overlapping with and earlier than the efficient differentiation in this work. Both paper [2] and this work leverage non-convex optimization layers of multiple variables.
- Paper [3] systematically studies the inexact gradient for implicit models in theory and practice.
- Papers [4,5] delve into the Out-of-Distribution (OOD) generalization of implicit layers and the benefits from path-independence/convergence.
- Paper [6] discusses the adversarial robustness of implicit models.
- The method section can be rewritten for better clarity. The paper could benefit from an improvement in notation clarity.
Typos:
- Line 314: "saccelerater" should be "accelerate".
[1] Bai, Shaojie, J. Zico Kolter, and Vladlen Koltun. "Deep equilibrium models." Advances in Neural Information Processing Systems 32 (2019).
[2] Geng, Zhengyang, Meng-Hao Guo, Hongxu Chen, Xia Li, Ke Wei, and Zhouchen Lin. "Is Attention Better Than Matrix Decomposition?." In International Conference on Learning Representations. 2020.
[3] Geng, Zhengyang, Xin-Yu Zhang, Shaojie Bai, Yisen Wang, and Zhouchen Lin. "On training implicit models." Advances in Neural Information Processing Systems 34 (2021).
[4] Anil, Cem, Ashwini Pokle, Kaiqu Liang, Johannes Treutlein, Yuhuai Wu, Shaojie Bai, J. Zico Kolter, and Roger B. Grosse. "Path Independent Equilibrium Models Can Better Exploit Test-Time Computation." Advances in Neural Information Processing Systems 35 (2022).
[5] Bai, Shaojie, Zhengyang Geng, Yash Savani, and J. Zico Kolter. "Deep equilibrium optical flow estimation." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[6] Yang, Zonghan, Tianyu Pang, and Yang Liu. "A Closer Look at the Adversarial Robustness of Deep Equilibrium Models." Advances in Neural Information Processing Systems 35 (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How was the OOD detection layer initialized for the EM algorithm? Given that EM may fall into local optima, how does this influence the model's performance in the OOD settings?
- The accuracy gain from iterations doesn't appear to plateau in Figure 3(b). Is the EM algorithm convergent at the maximum iterations? Would it benefit from additional iterations?
- Could you plot the fixed point error $|\mu_{t} - EM(\mu_{t})|$ alongside the optimization steps and include this plot near Figure 3(b)?
- Given the paper's claim of the robustness of the OOD detection layer, I am curious about the adversarial robustness of the proposed VK-OOD in the VQA settings.
- Specifically, could any input perturbations/attacks impact the convergence or stability of the implicit layer? It would be useful for authors to monitor the convergence using the aforementioned fixed point error.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper discusses the limitations of the proposed OOD detection implicit layer, stating that it could infer inefficiently if the covariance matrix is parameterized densely. A fast linear system solver could potentially rectify this inefficiency. However, these extensions was left for future works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper. We are glad that you found our proposed method, with its implicit OOD detection layer, to be novel and to have efficiency benefits. We have started integrating the suggested related research into the manuscript and will incorporate all the modifications into the revised version. Please check our general responses for clarification on the model parameters and performance. We now address your specific questions:
**Q1: How did you initialize your EM algorithm?**
We initialized the OOD detection layer parameters $\mu$ and $\sigma$ using $k$ random vision features of the inputs. Our OOD detection layer may fail during inference. However, for training purposes, approximating $\mu^*$ and $\sigma^*$ appears to be sufficient. With approximations $\tilde{\mu}$ and $\tilde{\sigma}$, we use the GEM score to filter OOD features in the multimodal pipeline. In the experimental results, Table 2 in the paper illustrates that models with the OOD detection layer perform better than models without it on downstream tasks.
**Q2: Is the EM algorithm convergent at the maximum iterations? Would it benefit from additional iterations?**
Please see general response 3. Perhaps there is a misunderstanding, but we could not think of any reason why accuracy should plateau in Figure 3(b), because the x-axis in Figure 3(b) is the number of triplets/clusters. We believe the reviewer may be referring to Figure 3(a), whose x-axis is the number of iterations $T$, where we can observe plateauing.
Yes, we believe more iterations may be beneficial. However, considering the training costs, we conducted ablation studies with $T \leq 10$, and the improvements in terms of accuracy are slow for $T \geq 5$. Moreover, the main goal of the OOD detection layer is to find approximate clusters for the inputs.
**Q3: Can you provide empirical convergence plot of $\mu,\sigma$ of In-Distribution (ID) features using EM algorithm?**
We have provided the fixed point error plot over iterations in the rebuttal pdf. We find that the squared euclidean distance between successive iterates indeed goes to zero. We will add this plot in Figure 3 in the revised version, thank you for the suggestion.
**Q4: How is Adversarial Training related to OOD detection?**
Thank you for the question. We can dedicate one mixture component, with parameters $\tilde{\mu}_k$ and $\tilde{\sigma}_k$, to handling adversarial samples. To calculate this component, we can use an appropriate adversarial sample generation or attack algorithm at the feature level. We leave this for future work.
**Q5: Could any input perturbations/attacks impact the convergence or stability of the implicit layer?**
Yes, it can impact convergence behavior of EM algorithm, and hence our layer. Designing robust outlier detection is a great suggestion and still remains an open problem, so we will consider the problem for future work.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer oiir,
We would greatly appreciate your feedback on whether our rebuttal has adequately addressed your concerns, and we are more than happy to answer any other remaining questions you may have. We sincerely value your feedback. Many thanks in advance!
---
Rebuttal Comment 1.2:
Title: Thanks for response
Comment: Thanks for the author's response! It has addressed many of the points I raised. Considering the limited discussion time left, I have first adjusted my score to 5.
That said, several concerns still persist. Addressing these is essential prior to the publication of this work. I believe there's room for a deeper exploration into the advantages of convergence and the role of implicit layers, which could elevate the overall quality of the paper.
- **Q2**: Sorry for the typo. It should be Figure 3(a). Figure 3(b) in the submitted PDF is nice.
- **Q3**: The essence of my third question (and Q2) is how the ID and OOD setup influence the fixed point convergence. Specifically, is there a notable difference in convergence behavior between the two setups? Does it take more iterations to converge in the OOD setup? How does convergence associate with performance in both ID and OOD setups? I did not see a particular figure to compare the ID and OOD convergence. There is only one plot for convergence that illustrates convergence under two gradient methods.
- **Q4**: Thanks for replying to this question. Incorporating this discussion as a 'future work' section in the paper would be beneficial.
- **Q5**: I understand that implementing attacks can be a bit complex and is likely more suited for future work. However, **assessing a model's generalization through simple input perturbations over a pretrained checkpoint could be feasible in the current scope** and is at an acceptable workload. Introducing Gaussian noise or dropping patches are potential candidates to consider. This request is because we can control the distribution gap by noise levels or drop rates, which can help investigate how the EM implicit layer improves OOD performance. Ideally, I anticipate a plot delineating multiple curves derived from varying noise levels or drop rates, with accuracy plotted on the y-axis against the number of EM steps on the x-axis. Additionally, it would be informative to have a plot illustrating curves from noise levels/drop rates against the convergence $| \mu - EM(\mu) |$, over EM steps. This question extends Figures 3(a) and 3(b) to understand the generalization under the growing distribution gap. If possible, adding a table to the discussion can be helpful. I understand making plots for paper would take some time.
---
Reply to Comment 1.2.1:
Title: Thank you for the further discussions
Comment: Dear Reviewer oiir,
Thank you for the constructive feedback. We will incorporate more and deeper convergence analysis of our proposed implicit layer in revision, including analysis (and tables/figures) of both ID and OOD setups, and converged rate over optimization steps with different noise levels.
**Q3:** OOD samples are not present within the training datasets themselves. Instead, in our proposed method, we encounter outliers when integrating external knowledge triplets into the training pipeline. In other words, if we denote by $M$ the number of external knowledge triplets, then $M=0$ corresponds to the ID setup that you mentioned. For this ID setup, we included the model performance using our implicit layer in Table 2 in the paper. We now present quantitative results to answer your question. We consider two setups: $M=0$, i.e., the ID setup, and an OOD setup with $M=5$, corresponding to augmenting features from external knowledge. The results in the following table show no significant difference in the rate of convergence --- as indicated by the squared norm of successive iterates $\|\mu_t-\mu_{t+1}\|_2^2$ --- between the ID and OOD setups. However, the Accuracy column shows that performance on VQA tasks improves significantly over iterations when external knowledge is considered, as in the OOD setup with $M=5$.
| | ID(M=0) | | OOD(M=5) | |
|:--------------:|:-------------------------:|:--------:|:-------------------------:|:--------:|
| T (iterations) | $\|\mu_t-\mu_{t+1}\|_2^2$ | Accuracy | $\|\mu_t-\mu_{t+1}\|_2^2$ | Accuracy |
| 1 | 1.94 | 73.6 | 2.15 | 73.1 |
| 3 | 0.059 | 73.8 | 0.089 | 74.8 |
| 5 | 0.038 | 73.9 | 0.051 | 76.1 |
| 8 | 0.042 | 73.9 | 0.036 | 76.5 |
| 10 | 0.036 | 74.1 | 0.034 | 76.8 |
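The convergence diagnostic reported above, the squared norm of successive EM mean iterates, can be reproduced on synthetic data. The following is a minimal sketch under illustrative assumptions (a spherical unit-variance mixture and toy two-cluster data), not the paper's multimodal pipeline:

```python
import numpy as np

def em_means(X, k, iters, seed=0):
    """Run EM for a spherical unit-variance Gaussian mixture, tracking the
    squared norm of successive mean iterates as a convergence diagnostic."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=k, replace=False)]  # init means from data
    gaps = []
    for _ in range(iters):
        # E-step: soft assignments from squared distances (unit variance assumed)
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        resp = np.exp(-0.5 * d2)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means
        new_mu = (resp[:, :, None] * X[:, None, :]).sum(0) / resp.sum(0)[:, None]
        gaps.append(float(((mu - new_mu) ** 2).sum()))
        mu = new_mu
    return mu, gaps

# Two well-separated synthetic clusters: the gap shrinks over iterations.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
mu, gaps = em_means(X, k=2, iters=10)
```

As in the table, the diagnostic drops quickly in the first few iterations and then plateaus near the fixed point.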
**Q5:** Thank you so much for the suggestion -- we have been thinking about it as well! In fact, to evaluate the sensitivity of our model with respect to OOD detection performance, we already included some experiments on incomplete knowledge triplets with missing values in Section 4.1 of the paper. In the implementation of that section, we only dropped language-feature inputs due to computational reasons -- augmenting image patches requires more resources such as GPU support. However, since $x$ is used for both visual and language features, the implementation is the same as dropping patches in language features. Hence, we assume it is equivalent to drop either the visual or the language features, since the computational effort of running the EM algorithm depends only on the total number of features. Under this assumption, we now present more results of $\|\mu_t-\mu_{t+1}\|_2^2$ and OKVQA performance in terms of accuracy over optimization iterations in the table below for easy reference. With higher levels of incompleteness, the rate of convergence is slower; however, accuracy improves over iterations in all settings.
| | 25\% incompleteness | | 50\% incompleteness | | 75\% incompleteness | |
|:--------------:|:-------------------------:|:--------:|:-------------------------:|:--------:|:-------------------------:|:--------:|
| T (iterations) | $\|\mu_t-\mu_{t+1}\|_2^2$ | Accuracy | $\|\mu_t-\mu_{t+1}\|_2^2$ | Accuracy | $\|\mu_t-\mu_{t+1}\|_2^2$ | Accuracy |
| 1 | 2.213 | 48.2 | 2.448 | 47.3 | 2.735 | 44.6 |
| 3 | 0.174 | 48.9 | 0.214 | 47.7 | 0.208 | 45.1 |
| 5 | 0.065 | 50.6 | 0.092 | 48.6 | 0.244 | 46.0 |
| 8 | 0.057 | 51.2 | 0.108 | 49.1 | 0.112 | 46.6 |
| 10 | 0.051 | 51.8 | 0.084 | 49.4 | 0.167 | 46.9 |
Thanks again for these valuable technical suggestions. We will properly incorporate both of these tables within our plots and the additional discussions in our revision. | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper. We appreciate that the reviewers unanimously agree that our submission is sound and well-presented, with contributions clearly written. We are committed to fixing all the typos and incorporating all other suggested modifications into the revised manuscript. In this response, we address all the technical concerns and questions to further strengthen support for our submission.
## 1. Why are the number of parameters higher than the baselines in Table 4? (Reviewer uxWt, h1Px, MwwQ)
To clarify, the parameters ($\mu$ and $\sigma$) of our proposed OOD detection layer can be initialized in two ways: with a **scalar** $\sigma$, or with a **dense** $\sigma$ that is a $d \times d$ matrix, where $d$ is the dimension of the input embeddings. The number of parameters of the OOD detection layer is $d \times k \times \text{batch size} + k \times \text{batch size}$ in the scalar case and $d \times k \times \text{batch size} + d \times d \times k \times \text{batch size}$ in the dense case. Note that if we opt for $\sigma$ as a diagonal matrix, each component of the mixture model requires $2d$ parameters, as opposed to just $d$. In our implementation, $d$ is 768, $k$ is 5, and the batch size varies with the backbone (e.g., 32 or 64). We have provided an **updated Table 4** covering both cases in the rebuttal PDF. Importantly, in the updated Table 4, the number of parameters of scalar VK-OOD is approximately the same as the baselines. Specifically, compared to the other baseline models, our *scalar* VK-OOD increases the number of parameters only slightly -- ***$\approx 0.4$*** million more parameters -- while **significantly** *improving* performance on many of the downstream tasks considered in our submission.
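As a quick sanity check on the counting formulas above, the following sketch evaluates them with $d=768$, $k=5$, and an illustrative batch size of 64 (one of the values mentioned):

```python
# Parameter counts for the OOD detection layer under the formulas stated above:
# scalar sigma vs. a dense d x d covariance per component (batch size of 64 is
# illustrative; it varies by backbone).
d, k, batch = 768, 5, 64  # embedding dim, mixture components, batch size

scalar_params = d * k * batch + k * batch          # means + scalar sigmas
dense_params  = d * k * batch + d * d * k * batch  # means + dense covariances
diag_params   = 2 * d * k * batch                  # diagonal sigma: 2d per component

print(f"scalar: {scalar_params/1e6:.2f}M, dense: {dense_params/1e6:.1f}M")
```

The scalar case stays in the sub-million range, consistent with the "approximately similar to the baselines" claim, while the dense case is dominated by the $d \times d$ term.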
## 2. What improvements does VK-OOD demonstrate in Tables 1 and 4? (Reviewer uxWt, h1Px, MwwQ)
In the **updated** Table 1 in the rebuttal PDF, we highlight the empirical advantages (e.g., computational efficiency) of the Jacobian-Free Backpropagation (JFB) method in multimodal settings compared to alternative gradient-calculation methods. The results illustrate that the proposed JFB-based implicit layer reduces gradient computation costs in terms of time and memory usage.
Reviewers will be able to see that the **updated** Table 4 in the rebuttal PDF clearly shows model performance improvements in terms of the appropriate evaluation metrics on specific downstream tasks, as we explain below:
- For the visual understanding tasks (such as VQA and NLVR), both our scalar and dense VK-OOD seem beneficial. For example, in NLVR2, our dense VK-OOD achieves the best result in terms of accuracy with **10\% and 1.7\% increase** than ViLT and BLIP respectively. The scalar VK-OOD also outperforms all baselines with similar number of parameters in the understanding tasks shown in the updated Table 4.
- For the retrieval tasks, the benefit of using VK-OOD is less significant, but it shows the robustness of the proposed implicit layer for filtering features. We explain why this empirical result is natural. Recall that in the experiments, the goal of retrieval is to retrieve images from text queries or to retrieve texts from images. Since we augment captions with external knowledge, which is known to induce ***noisy textual features***, the slight improvement in image/text retrieval on the COCO and Flickr30k datasets is expected. The numbers reported by the baselines use no such external knowledge, indicating that our implementation works as intended! Finally, even on the retrieval tasks, it may be possible to improve performance by adjusting the noise levels (number of external knowledge triplets) during fine-tuning.
## 3. Is it necessary to calculate the fixed point exactly for training in the multimodal pipeline? Is it acceptable to run EM for only a few iterations when clustering multimodal features? (Reviewer oiir, NYTC)
No, for training purposes, approximate $\mu^*$ and $\sigma^*$ are sufficient. In the VK-OOD training pipeline, we only require the gradients provided by the proposed OOD detection layer to be ***a descent direction** with respect to the loss as a function of network parameters*. Thus, as long as the backpropagated gradient is correlated with (i.e., has a nonnegative inner product with) the true gradient of the upstream layers, the updated weights will still *decrease* the loss function, thereby making optimization progress. Formally, by nonnegative correlation, we need the calculated gradient to satisfy $\left(g_{\mu^*,\sigma^*}\right)^{\top} g_{\tilde{\mu},\tilde{\sigma}} \geq 0$, where $\tilde{\mu}, \tilde{\sigma}$ are approximations, returned by finitely many EM iterations, of the true cluster parameters $\mu^*,\sigma^*$.
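The nonnegative-correlation condition can be checked numerically. A minimal sketch with hypothetical gradient vectors (the function name and values are illustrative, not from the paper):

```python
import numpy as np

def makes_progress(g_approx, g_true):
    """Nonnegative-correlation check: an approximate gradient (e.g., from a
    truncated EM run) still yields a descent direction for the loss as long
    as its inner product with the true gradient is nonnegative."""
    return float(g_approx @ g_true) >= 0.0

g_true = np.array([1.0, -2.0, 0.5])
g_approx = g_true + np.array([0.1, 0.2, -0.05])  # small approximation error
```

A small perturbation of the true gradient satisfies the condition, whereas a flipped gradient does not, which is the intuition behind tolerating inexact fixed points during training.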
Pdf: /pdf/edc6db76d1e800499fa1b0b4ea4b3b7261d05083.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a framework for multi-modal (text and image) analysis subject to a differentiable framework for out-of-distribution detection in the input space. The method proposes a pipeline to predict the modes of the in-distribution data using a Gaussian mixture model and then leverages it to predict an out-of-distribution (OOD) score for knowledge triplets. These OOD scores are then used to weigh image-text matching as the training objective. The multi-modal encoded features are then used for down-stream applications such as VQA.
Strengths: The paper leverages the GEM [33] out-of-distribution detection method (in a differentiable setup) in the context of text anomaly detection, which it then uses for image-text score matching.
Weaknesses: While the method seems to have potentials for OOD detection, there are a couple of shortcomings regarding its motivation and its usage for the downstream applications:
- One of the motivations of the paper in the abstract is the difficulty of fine-tuning and deployment as the model size increases. However, compared to other baselines in Table 4, the model has higher number of parameters. I was expecting a model much lighter but with a comparable or slight degraded performance. Compared to simple EM baselines in Table 1, the approach has about 12% faster training time, but has almost similar FLOPS and params (compared to JB-EM). I think the motivation is not well justified for this approach.
- While the OOD detection component should theoretically boost the performance compared to other baselines, in table 4, there is not a big gap in-between methods. By using a ViLT backbone the model gains a reasonable boost compared to the same model (still having more params). Using BLIP backbone this is not the case. The question is whether the OOD component is needed or the performance gain is due to other architectural and training components? Some of these baselines do not seem to be using any OOD component yet they obtain decent/similar results.
- (related to above) It is not clear how much OOD samples exist in the training data and how much it is needed in a model. While the approach seems interesting, it is not clear how much it is useful given the application.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I went over the paper a couple of times to fully understand it, some parts were not fully clear
- In line 105, x is described as input features including triplets. I am assuming that x only contains the text data and not the visual data. Given x_i, the mean and standard deviation (std) of the in-distribution (ID) data are computed. Since the mean and std directly depend on x_i, are there out-of-distribution data in x_i when computing these values? If so, what is their percentage, and how much can they impact the obtained mean and std?
- What is the matching score function m() in Eq. (4)? This is not defined.
- Given the image v_q and triplet l_j similarity score p(v_q, l_j) in Eq. (4), how the final p(v,l) is computed in Eq. (5)? Is it for a specific triplet or some sort of aggregation over all triplets is taken?
- Given the above description, my understanding is that OOD is measured only on the textual data in Eq. (3) of the paper. However, the training objective is to obtain an image-text matching score. Why is OOD computed only on text and not on the joint representation of image and text? Isn't the text-image score m() already capturing the OOD cases, which is more relevant for the objective (image-text matching)?
Typos:
- abstract (The following line should be fixed -- remove but): When these models are used for prediction, but they may fail to capture important semantic information and implicit dependencies within datasets.
- start (*) for sigma in Eq. (3) is missing.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: No concerns
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper. We would like you to kindly read our general responses for the clarifications on the model parameters and performance. Here we will address your specific concerns and/or questions:
>One of the motivations of the paper in the abstract is the difficulty of fine-tuning and deployment as the model size increases. However, compared to other baselines in Table 4, the model has higher number of parameters. I was expecting a model much lighter but with a comparable or slight degraded performance. Compared to simple EM baselines in Table 1, the approach has about 12% faster training time, but has almost similar FLOPS and params (compared to JB-EM). I think the motivation is not well justified for this approach.
**What are our motivations?** Our goal is to integrate existing external knowledge into multimodal pipelines. However, upon further investigation, we noticed that external knowledge could be noisy and can hinder training. With this motivation, we developed a differentiable outlier detection layer with decent theoretical and practical advantages. For example, please see the general response 1 and 2, and the rebuttal PDF for the updated Table 4.
>While the OOD detection component should theoretically boost the performance compared to other baselines, in table 4, there is not a big gap in-between methods. By using a ViLT backbone the model gains a reasonable boost compared to the same model (still having more params). Using BLIP backbone this is not the case. The question is whether the OOD component is needed or the performance gain is due to other architectural and training components? Some of these baselines do not seem to be using any OOD component yet they obtain decent/similar results.
**Why is there not a big gap between methods?** Please refer to general response 2. Moreover, as the results in Table 2 indicate, both the KG and OOD components provide nontrivial gains in understanding and generalization performance in large-scale vision and language settings. Furthermore, we demonstrate the scalability of VK-OOD across backbone models with diverse architectures and model sizes. Our proposed implicit layer can be regarded as a plug-and-play module for multimodal pretrained models, making it easy to incorporate into other vision-language models, **as can be seen in our submitted code**.
>It is not clear how much OOD samples exist in the training data and how much it is needed in a model. While the approach seems interesting, it is not clear how much it is useful given the application.
**What are OOD samples and why do we need them?**
The OOD samples are not present within the training datasets themselves. Instead, in our proposed method, we encounter outliers when integrating external knowledge triplets into the training pipeline. For example, when extracting external knowledge triplets using the given captions, certain concepts/objects might be out-of-distribution (OOD) w.r.t. the textual/visual features present in the training datasets. For a detailed illustration of the OOD examples, please refer to Figure 5. In the experimental setup, we perform ablation studies on the VK-OOD components as detailed in Section 4.2. As the results in Table 2 show, the models with OOD detection components consistently outperformed those lacking OOD detection layers in both settings.
**Q1: Does $x_i$ include visual features and textual features? What is the fraction of OOD samples?**
Yes! $x_i$ represents the union of features from the text and image data. Our key idea is to approximate the distribution of in-distribution features with a mixture model parameterized by $\mu_k^*$ and $\sigma_k^*$, fitted on this union of features. Intuitively, if an image/language feature $x_i$ is close to at least one of these components, it is considered in-distribution. In equation (3) we show the score calculation only for $l_j$ for simplicity, but we are happy to clarify this below the equation.
In our experiments, we treat the features from images and captions in the training dataset as In-Distribution (ID) samples, and we then augment the captions with external knowledge, which may be OOD samples. In particular, if all external triplets are OOD, the training set will contain at most $\approx 70\%$ OOD samples, since we augment $\approx 5$ external triplets for every textual input.
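The "close to at least one component" intuition can be sketched as a simple scoring rule. This is an illustrative GEM-style distance score on toy mixture means, not the paper's exact formulation in Eq. (3):

```python
import numpy as np

def ood_score(x, mus, sigma=1.0):
    """Negative squared distance from feature x to the nearest mixture mean.
    A feature close to at least one in-distribution mode scores high; a
    far-away feature scores low and can be flagged as OOD."""
    d2 = ((mus - x) ** 2).sum(axis=1) / sigma**2
    return -float(d2.min())

# Illustrative: two ID modes (e.g., fitted by EM); thresholding the score
# would separate in-distribution features from outliers.
mus = np.array([[0.0, 0.0], [5.0, 5.0]])
near = ood_score(np.array([0.1, -0.1]), mus)   # close to the first mode
far = ood_score(np.array([10.0, -10.0]), mus)  # far from both modes
```

A downstream pipeline could then down-weight triplets whose score falls below a chosen threshold.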
**Q2:** The matching score function $s$ in equation (4) is the cosine similarity between the image and textual (triplet) features. We will clarify this in the revised version.
**Q3:** The calculation of $p(v_q, l_j)$ is given in equation (4); we then average over all image-triplet pairs during training, as denoted by the expectation ($\mathbb{E}$) in Eq. 5.
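The two answers above (a cosine matching score, then averaging over all image-triplet pairs) can be sketched as follows; the feature vectors are toy values and the function names are illustrative:

```python
import numpy as np

def cosine_score(v, l):
    """Cosine similarity between an image feature v and a triplet feature l."""
    return float(v @ l / (np.linalg.norm(v) * np.linalg.norm(l)))

def matching_score(images, triplets):
    """Average the per-pair cosine scores over all image-triplet pairs,
    mirroring the expectation over pairs described above."""
    scores = [cosine_score(v, l) for v in images for l in triplets]
    return sum(scores) / len(scores)

images = np.array([[1.0, 0.0], [0.0, 1.0]])
triplets = np.array([[1.0, 0.0]])
avg = matching_score(images, triplets)
```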
**Q4: Is it possible to use OOD visual feature in your framework?**
Yes, of course! In our current implementation, we use textual external knowledge resources, which are easy to incorporate; thus, we filter OOD samples from the textual triplets. Our framework can indeed be modified to handle visual OOD features as well, and we will include this feature in our code release. In future work, we will consider directly integrating visual knowledge bases, such as Google Images.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer uxWt,
We would greatly appreciate your feedback on whether our rebuttal has adequately addressed your concerns, and we are more than happy to answer any other remaining questions you may have. We sincerely value your feedback. Many thanks in advance!
---
Rebuttal 2:
Title: final discussions
Comment: Dear Reviewer,
As discussions come to an end soon, this is a polite reminder to engage with the authors in discussion.
Please note we take note of unresponsive reviewers.
Best regards,
SAC | null | null | null | null | null | null |
Learning Reliable Logical Rules with SATNet | Accept (poster) | Summary: The authors propose a new framework to generate interpretable and verifiable logical rules through differentiable learning. The framework is built upon SATNet, but the paper proposes a new interpretation method called maximum equality to decode the weights of SATNet into logical rules. The paper also proposes several verification methods to validate the decoded rules against ground truth rules. Experiments show that the decoded rules are highly reliable and functionally equivalent to the ground truth rules.
Strengths: - The impressive contribution of this paper is to distill the black box weights of a differentiable logic program like SATNet into human interpretable rules (boolean expressions) that can be verified.
- Moreover, while the initial SATNet itself fails to achieve 100% test accuracy, the decoded interpretable rules manage to achieve perfect 100% test accuracy. This is remarkable because in the realm of logical reasoning, true generalization is a binary outcome: either the logical rules have truly been learned or they have not. In this paper, they show that they indeed have been learned, and verifiably so.
Weaknesses: - A major caveat to this work is that the 100% test accuracy was achieved by adding symbolic constraints that represent domain knowledge (e.g. rules of Sudoku). However, this is not necessarily a weakness, because incorporating domain knowledge is a useful thing to do in general, and it's great that the authors propose a recipe for doing so.
- It would be nice to see the authors apply their technique to visual Sudoku. That was one of the hallmark contributions of the original SATNet paper that learned to play Sudoku by looking at images of digits.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The paper mentions that the thresholding function was set to be approximately one-fifth of the average absolute value of non-zero elements in C. How was this decided, and how sensitive is the performance relative to the chosen threshold? It would be nice to see a graph of this.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations section was well-written. If possible, I'd like to see an extended discussion of future work (perhaps in the Appendix), so the authors can provide useful directions for future research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and comments! In the following, please let us address your raised concerns and questions.
> The 100% test accuracy was achieved by adding symbolic constraints that represent domain knowledge (e.g. rules of Sudoku).
We want to clarify that our approach can achieve 100% accuracy *without* adding any additional constraints on the 4 $\times$ 4 Sudoku dataset when employing the exact solver Gurobi on our decoded logical rules, and we can also verify that the decoded rules satisfy *unique functional equivalence* with the ground truth rules. Given the vast input space of 4 $\times$ 4 Sudoku puzzles, exhaustively enumerating all IO pairs for *general functional equivalence* verification is infeasible. Instead, we discover that by incorporating *partial* Sudoku rules (e.g., the constraint that each cell can only contain one digit) into our decoded rule sets, we can efficiently verify that our logical rules satisfy *the sufficient condition* for general functional equivalence. Furthermore, in practical real-world scenarios, we acknowledge the potential benefit of utilizing *partial* domain/commonsense knowledge as prior symbolic constraints. By applying our approach to learn interpretable rules from data and subsequently adding this domain knowledge into our rule sets, we can further enhance the reliability of our decoded logical rules. This ability to integrate domain-specific information makes our approach more versatile and suitable for real-world applications.
> It would be nice to see the authors apply their technique to visual Sudoku.
Thanks for the suggestion. We conduct additional experiments on the 4 $\times$ 4 visual Sudoku dataset. Similar to the non-visual setting, using SATNet and SATNet* alone achieves a solving accuracy of 99.83\% and 99.28\%, respectively, whereas using the exact solver Gurobi on our decoded rules achieves 100\% accuracy. Moreover, we further verify that the decoded rules in this setting satisfy *unique functional equivalence* as well.
> How was the threshold function for sparsity decided, and how sensitive is the performance relative to the chosen threshold?
The threshold function we design is based on our empirical evaluation. During our experiments, we observe that the expressivity of the decoded rules remains relatively stable across different threshold values. To validate this further, we conduct additional experiments using various threshold values for SATNet* on the 4 × 4 Sudoku dataset. The results are summarized below (denoting the average absolute value of the nonzero elements of $C$ as $\mu$):
| threshold | Sparsity | Solving Accuracy with Gurobi | Solving Time with Gurobi (per instance)|
| :-: | :-: | :-: | :-: |
| 0.05 $\mu$ | 0.12 | 100\% | 4.37s |
| 0.10 $\mu$ | 0.15 | 100\% | 3.91s |
| 0.20 $\mu$ | 0.53 | 100\% | 0.92s |
| 0.25 $\mu$ | 0.53 | 100\% | 0.93s |
| 0.50 $\mu$ | 0.57 | 100\% | 0.66s |
| 0.60 $\mu$ | 0.74 | 100\% | 0.37s |
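The thresholding step swept in the table above can be sketched as follows; the matrix values are illustrative, and `frac=0.2` mirrors the "one-fifth of the average absolute nonzero value" setting the reviewer asked about:

```python
import numpy as np

def sparsify(C, frac=0.2):
    """Zero out entries of C whose magnitude falls below frac times the
    mean absolute value of C's nonzero entries (the threshold rule above)."""
    nz = np.abs(C[C != 0])
    tau = frac * nz.mean()
    return np.where(np.abs(C) >= tau, C, 0.0)

C = np.array([[0.01, -0.9, 0.0], [0.5, -0.02, 0.3]])
S = sparsify(C, frac=0.2)  # small entries 0.01 and -0.02 are zeroed out
```

Larger `frac` values prune more entries, which matches the trend in the table: higher thresholds give sparser matrices and faster solving with Gurobi.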
> More discussion on future work.
We further discuss one possible way to improve our work as future work. As stated in our paper, our approach may result in a weighted MaxSAT formula with $O(n^2)$ clauses in the worst case. One promising avenue for improving our approach is to address the challenge of learning lifted rules. By effectively learning and incorporating lifted rules into the model, we can significantly reduce the size of the resulting rule sets. For example, representing concepts such as rows, columns, and $3 \times 3$ squares on the Sudoku board as variables would allow us to capture more concise and interpretable rules. Accomplishing this involves developing a domain-specific language (DSL) that efficiently represents a wide range of logical rules. The designed DSL would enable the lifting process, allowing SATNet to learn and leverage high-level concepts and relationships, leading to improved performance and compact decoded rule sets.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. | Summary: This paper builds on the SATNet framework on MaxSAT problems, adds an interpretation method that allows the conversion between its weight and propositional logical rules. Effective verification methods are proposed to see if the decoded rules from SATNet are functionally equivalent to the ground truth. Experiments on stream transformation and Sudoku problems show that the decoded "hard" logical rules achieve better performance than SATNet itself. The logical rules are verified to be equivalent to the ground truth.
Strengths: - The proposed interpretation method is well motivated and grounded theoretically.
- The proposed interpretation method allows for the integration of symbolic knowledge as additive weights, which is flexible for the injection of human domain knowledge.
- The verification methods are sound and have been practised on the experimental tasks of stream transformations and Sudoku.
- The paper is well-written.
- The decoded logical rules outperform SATNet on the experimental tasks, showing its practical effectiveness.
Overall, it's a solid contribution to the interpretability of deep learning and neuro-symbolic computation communities.
Weaknesses: For the machine learning readers, perhaps the authors can give a bit more elaboration when propositions are stated. A couple of worked examples can also help the reader understand a lot better.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: What challenges do you foresee when extending this work to more expressive logics like FOL and HOL, apart from the scaling issues mentioned in the discussions section? Are there fundamental bottlenecks to this? I would like to see more discussions on this, as propositional logic seems slightly limiting.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors addressed the limitations and scoped the work well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your positive review and valuable feedback! We would elaborate on our propositions in our revision. In this response, we discuss the challenges we anticipate when extending our work to more expressive logic like FOL and HOL:
* Representing logic rules in FOL/HOL using a simple matrix form and formulating the entire procedure as a clear mathematical optimization becomes challenging. Unlike propositional rules, these logics involve more complex structures (especially with quantifiers) that are difficult to encapsulate within a continuous optimization framework. To extend our work to more expressive logic, we anticipate the need for innovative approaches that incorporate efficient *quantifier elimination* as well as *lifted inference* techniques into the differentiable optimization process.
* FOL/HOL entails a significantly larger rule space compared to propositional logic, making it difficult to learn from data *without predefined constraints*. The expanded search space poses challenges in identifying meaningful and interpretable rules from the vast possibilities. Addressing this challenge may involve developing a domain-specific language (DSL) / meta rules to limit the rule space.
* Verifying the learned rules against ground truth rules in FOL/HOL is not straightforward. Due to the inherent complexity of these logics, the design of the verification process requires more careful consideration. Proving the equivalence of two FOL/HOL logical rules may necessitate the use of theorem provers (e.g., Coq), which requires considerable expertise. Beyond scalability, automating the verification process presents another challenge.
For example, one potential direction for future work is to explore building specification/interpretation methods on top of Neural Logic Machines (NLM) [1], which focuses on learning FOL rules with only nullary, unary, and binary predicates from data. However, unlike SATNet, which stores learned rules in a single $S$ (or $C$) matrix, NLM stores rules in the weights of neural networks (several MLPs). The optimization objective of NLM is not as straightforward as SATNet's, which can be expressed as an SDP. Additionally, interpreting MLPs is more challenging than interpreting a matrix alone. To decode interpretable rules from an NLM, one may explore replacing the MLPs with learnable matrices and transforming the entire neural architecture into a clear mathematical optimization form. By doing so, our specification methods may be similarly applied to the revised NLM to derive interpretable rules effectively.
[1] Dong, Honghua, et al. "Neural Logic Machines." In International Conference on Learning Representations (ICLR), 2019
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will maintain my score. | Summary: The authors show that interpreting the SATNet model is not reliable. They reformulate the SATNet objective to a 'maximum equality specification' on matrix $C$. Assuming $C$ is ternary instead of continuous, the objective can be interpreted as a MaxSat problem. This problem can then be verified. Furthermore, the new objective is amenable to injecting symbolic background knowledge.
Post rebuttal: I appreciate in-depth responses of the authors. I will keep my score.
Strengths: The paper is well-written and easy to follow. The structure is excellent, interleaving some experiments to motivate some choices. The problem of interpretability of learned rules is important and often overlooked in these logic-inspired architectures. I do not believe SATNet has been studied in this context before.
The paper presents interesting failure cases of SATNet. The symbolic background knowledge injection through compiling inside matrix $C$ is a very useful result, increasing the model's applicability.
Weaknesses: I have some questions about the theoretical ideas presented in the paper, particularly on the limitations of SATNet. The experiments are somewhat simple and could be presented more clearly or on more tasks (for instance, on ILP benchmarks like in $\partial$ILP [1]).
- The experiments could be a lot clearer with a table for the different tasks and the different evaluations. Ie, Compare SATNet and SATNet* as a continuous model and in the MaxSAT version.
- The benefits of the maximum equality specification for interpretability are not entirely apparent to me (see questions).
[1] Evans, R., & Grefenstette, E. (2018). Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research, 61, 1-64.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: - Limitations of $S$ (p4, 145-155): The matrix given does not represent parity / XOR. I think coordinate (3, 4) should be a -1 (ie the last row is -1 -1 -1). Otherwise the (correct) input $110$ is rejected. On this corrected matrix $S$, I computed $S^T S$ and did not get a zero matrix as claimed but a diagonal matrix ($4 I$ in particular). This is not due to the correction, as $S_\top^T S_\top = 4$, so coordinate (1, 1) is necessarily 4.
- Same part: The claim is that since $S^T S$ collapses, SATNet cannot theoretically represent a range of logical rules. I think this claim should be made a bit stronger: What (I think) you want to show is that two non-semantically equivalent CNF formulas map to the same $S^T S$. For instance, is there another problem than parity for which $S^T S$ equals $4 I$?
- Have you tried interpreting the matrix $S$ instead when using the sparsification procedure? Instead of the MaxSAT procedure? Would that be worse than the MaxSAT interpretation?
- Experiments: What exactly makes SATNet* different from SATNet? The sparsification?
- 348: Why is SATNet* not evaluated on 9x9 sudokus?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: While the authors developed a sparsification method, the MaxSAT problem will likely be huge and hard to interpret or verify. The verification part of this limitation is mentioned, not the interpretability.
The verification method amounts to enumerating I/O pairs. I would have hoped the interpretable rules would allow symbolic reasoning to prove equivalence. The authors argue this is impossible because it would require a transformation of SAT into MaxSAT (or vice versa).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed and in-depth comments and questions! In the following, we hope to address the stated weaknesses and questions of our paper.
> The experiments are somewhat simple and could be presented more clearly or on more tasks like ILP benchmarks in $\partial$ILP.
We want to clarify that the setting of SATNet is quite different from that of ILP: SATNet aims to learn propositional logical rules directly from data without any predefined rules, while ILP requires background knowledge and predefined rules (or templates). Thus, applying SATNet to ILP tasks might not yield a fair comparison due to the contrasting problem settings.
> Limitations of $S$ / The claim that SATNet cannot theoretically represent a range of logical rules.
We apologize for the typos in our paper and appreciate your attention to detail. The last row in the ground truth rules should be $(-1, -1, -1)$, and the corresponding matrix is $C = S^TS = 4 I$. However, what we are particularly concerned about are the off-diagonal elements of the matrix $C$. Due to the SDP formulation of SATNet, its optimization is not affected by the diagonal elements of $C$, as the calculations associated with these diagonal elements reduce to constants (specifically, $v_i^T v_i = 1$). When all the off-diagonal elements of $S^TS$ are zero, the SDP becomes trivial, and SATNet fails to capture any meaningful logical rules. This issue also manifests in other symmetric rules such as $(\neg x \lor y) \land (x \lor \neg y)$. In our revision, we will provide a more detailed explanation of this aspect to enhance clarity.
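For concreteness, the corrected parity example can be checked numerically. The sketch below is a minimal numpy verification in which the exact row ordering of $S$ is assumed (the rows shown encode the even-parity constraint $x \oplus y \oplus z = 0$, with the last row $(-1, -1, -1)$ as discussed above):

```python
import numpy as np

# Assumed row ordering: each +/-1 row is a clause excluding one
# odd-parity assignment, e.g., (-1, 1, 1) is the clause (!x v y v z),
# falsified only by the assignment 100.
S = np.array([
    [-1,  1,  1],
    [ 1, -1,  1],
    [ 1,  1, -1],
    [-1, -1, -1],
])

C = S.T @ S
print(C)  # 4 * identity: every off-diagonal element is zero

# The SDP objective only sees the off-diagonal entries of C
# (the diagonal terms reduce to the constant v_i^T v_i = 1),
# so this C carries no usable signal for SATNet's optimization.
off_diag = C - np.diag(np.diag(C))
print(np.count_nonzero(off_diag))  # 0
```

The same check applies to the other symmetric rules mentioned in the rebuttal: their $S^T S$ is likewise diagonal, which is why the SDP becomes trivial.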
> Have you tried interpreting the matrix instead when using the sparsification procedure?
We initially attempted several regularization methods to enforce $S$ to be a ternary matrix during the training process, but unfortunately, these approaches did not yield successful results.
> What exactly makes SATNet* different from SATNet? The sparsification?
We want to first emphasize that SATNet* and SATNet can both be used to learn rules from data, and our specification methods (with exact inference) and verification methods can be applied to both of them. Here are the two main technical differences between SATNet* and SATNet:
* Learning procedure: SATNet learns the $S$ matrix, while SATNet* considers the $C$ matrix as the parameters and directly learns the weights of $C$. This distinction leads to a reduction in both the number of parameters and hyperparameters in our model. Specifically, $S \in \mathbb{R}^{m \times n}$ and $C \in \mathbb{R}^{n \times n}$, where $n$ is the number of variables (including the auxiliary variables), and $m$ is the number of clauses. In many cases, $m > n$, which means that SATNet* may have fewer parameters than SATNet. Moreover, when setting hyperparameters, SATNet considers the number of clauses and auxiliary variables, while SATNet* only considers the number of auxiliary variables. This makes SATNet* a more convenient and practical choice for real-world applications.
* The Sparsification: The sparsification technique is another key distinction. It strikes a balance between the expressivity and the size of the learned logical rules. By employing sparsification, SATNet* can reduce the size of decoded logical rules in some cases, which in turn accelerates exact inference using Gurobi (with more than 10$\times$ speed up).
> Why is SATNet* not evaluated on 9 $\times$ 9 sudokus?
We did attempt to use SATNet* on 9 $\times$ 9 Sudokus; however, the convergence of SATNet* was slower than that of SATNet within 100 epochs. Although SATNet* could indeed reduce the size of decoded logical rules, the number of clauses is still in the hundreds of thousands. As a result, we chose to use SATNet instead for $9 \times 9$ Sudokus. On datasets other than 9 $\times$ 9 Sudoku, we find that SATNet* converges as quickly as SATNet and effectively reduces the size of decoded rules.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comment.
I am not entirely sure I agree with the assessment of ILP there. ILP does not _require_ background knowledge; it is optional in most cases. From what I recall, SATNet also uses 'templates', in that it creates propositional CNFs. The problem settings thus seem pretty similar to me: both aim to learn interpretable rules.
Thanks for the clarification on the example, and I am somewhat glad I did not make a calculation mistake!
About 9x9 sudokus: I would appreciate if the mentioned results on slow convergence are added for completeness to the paper. Also, why is the number of clauses in the 100s of thousands? That seems like more than I would have expected. Do you believe the slowness of SATNet* to come because it is not 'overparameterised', in a sense?
---
Reply to Comment 1.1.1:
Comment: Thank you for your quick and insightful feedback! We hope to address your comments in detail as follows:
> ILP does not require background knowledge, it is optional in most cases. From what I recall, SATNet also uses 'templates', in that it creates propositional CNFs.
While it is true that background knowledge is not always mandatory for ILP, tasks where ILP is employed typically integrate this knowledge in practice (e.g., the datasets in $\partial$ILP). Besides this, the main difference between ILP and SATNet is their rule structures. In ILP, human experts need to meticulously define rule templates & program templates *on the predicates* to constrain the rule space. Conversely, SATNet doesn’t demand such intricate templates for the underlying logic structures, focusing instead on learning the propositional logical rules *associated with the variables*. For SATNet, the only requirement is to set the shape of the $S$ matrix, particularly determining the number of variables, $n + n_a$, and the number of clauses, $m$. Therefore, the settings and applications of ILP and SATNet are quite different.
> About 9x9 sudokus: I would appreciate it if the mentioned results on slow convergence are added for completeness to the paper.
We will incorporate these mentioned results into the revision of our paper.
> Why is the number of clauses in the 100s of thousands? That seems like more than I would have expected.
As stated in our paper, if $C$ is a dense matrix without zero elements, our specification method would result in a weighted MaxSAT formula with around $3 \cdot \binom{n+n_a+1}{2}$ clauses. In the case of $9 \times 9$ Sudoku, with $n = 729$ defined variables and $n_a = 300$ auxiliary variables, the decoded weighted MaxSAT formula yields such a massive number of clauses.
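As a quick sanity check (not from the paper), the stated bound can be evaluated directly; the gloss on the "+1" below is an assumption:

```python
import math

# Dense-C bound on the number of decoded clauses: 3 * C(n + n_a + 1, 2).
# For 9x9 Sudoku: n = 729 defined variables, n_a = 300 auxiliary variables
# (the +1 presumably accounts for SATNet's truth variable).
n, n_a = 729, 300
bound = 3 * math.comb(n + n_a + 1, 2)
print(f"{bound:,}")  # 1,589,805
```

The fully dense bound exceeds a million clauses; sparsification reduces the decoded formula, but as the rebuttal notes it remains in the hundreds of thousands for this task.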
> Do you believe the slowness of SATNet* to come because it is not 'overparameterized', in a sense?
Yes, we believe that your raised point aligns with our thoughts and might be a reason regarding the performance of SATNet*. As stated in our paper, the sparsification procedure strikes a balance between the expressivity of SATNet* to learn the underlying rules and the size of the decoded rules. An interesting perspective of the results might be that SATNet, being 'overparameterized', effectively captures embedded rules within its parameters during training, whereas SATNet* falls short on $9 \times 9$ Sudoku puzzles.
We genuinely appreciate your feedback and are open to further discussions to enhance our work! | Summary: This paper builds on SATNet, a differentiable MaxSAT solver that was proposed in the past to learn logical rules from input-output examples. SATNet is based on a "low rank semidefinite programming approach" and uses a learnable matrix S to capture the logical rules. SATNet was shown to learn to solve e.g. Sudoku puzzles with near-perfect accuracy. The problem addressed in this paper is that the S matrix and the learned weights in SATNet lack a clear, logical meaning. The authors first describe a set of (failed) experiments that aim to extract the meaning from S. Then they describe a new formulation called maximum equality which enables them to formulate MaxSAT formulas that can be solved with off-the-shelf solvers. The authors demonstrate both theoretically and empirically that this new formulation indeed captures the underlying logical rules; it also opens up the possibility of new applications, such as adding domain-specific constraints to a problem.
Strengths: * The paper is very well written. It provides a gentle introduction to the problem and a clear running example to illustrate the technique.
* The paper makes a clear contribution over the previous SATNet approach. The proposed approach is very interesting and has many practical applications.
* The novelty is not only in the "decoding" but also in the functional comparison between the decoded rules and the ground truth.
Weaknesses: none
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: none
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper describes carefully the limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your positive comments and feedback! If you have further questions, we are happy to answer them! | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The work investigates the problem of generating interpretable and verifiable logical rules through differentiable learning. Specifically, it uses SATNet, a deep network layer for satisfiability solving that acts as a differentiable maximum satisfiability (MaxSAT) solver and learns logical rules from input-output examples. Through experiments, the authors claim that the learned weights in SATNet lack explainability. To address this issue, a method called maximum equality is proposed to interpret the weights of SATNet as a set of propositional logical rules. Stream transformations and Sudoku problems were taken as the tasks to evaluate the method.
Post rebuttal: I have read the authors' rebuttal and I appreciate the authors' effort in addressing my concerns. I agree with the authors' point on interpretable logical rules in the rebuttal. My concerns about the application of the proposed methods to models beyond SATNet and to other, more complex tasks still hold.
Strengths: +The work proposes a method called maximum equality to interpret the weights of SATNet as a set of propositional logical rules, improving the explainability of SATNet.
+The method was evaluated on the stream transformation and Sudoku tasks, and the results show that the rules decoded with the maximum equality method are better than those obtained from the original SATNet.
+To a certain extent, it improves the explainability of learned logical rules from input-output examples.
Weaknesses: -The work builds upon a specific satisfiability solving method (SATNet), which limits the impact of the work.
-It is claimed that the lack of interpretability of SATNet stems from the clause matrix $S$ not being enforced to be ternary, and that rounding $S$ to a ternary matrix didn't yield correct logical rules. However, it would be interesting to investigate how to interpret $S$ through a learnable network or another approach.
-The matrix $C = S^T S$ is decomposed, and the objective of maximum equality is to minimize the similarity between each pair of variable vectors of $S$. A weight $c_{ij}$ is associated with the similarity between each pair of variable vectors. It is not clear how $c_{ij}$ will affect the generated logical rules.
-The work relies on the prior work SATNet and a few terms are not explained, which hinders understanding the work. For example, SATNet - satisfiability solving network; MaxSAT - maximum satisfiability (MAXSAT) solver.
-In Eq. 2 and Eq. 3, mimimize -> minimize
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: -Would it be feasible to use a deep neural network to interpret $S$ of SATNet?
-Is $c_{ij}$ predefined or learnable?
-How is an appropriate value set for $n_a$?
-The visual Sudoku problem was used in the SATNet work; have you tested your maximum equality method on this task as well?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have mentioned the limitation of this work in only learning propositional logical rules and not being able to handle rules in first-order or higher-order logic.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments and feedback. In the following, we will address your questions point by point.
> The work builds upon a specific satisfiability solving method (SATNet), which limits the impact of the work.
Indeed, SATNet is an award-winning architecture (ICML 2019 Best Paper Honorable Mention) that achieves state-of-the-art performance for learning *neural representations* of logical rules from data without using specified rule templates. While our approach builds upon SATNet, it is important to highlight that our work represents the *first* contribution in the logical rule learning/logic programming field that enables the learning of *interpretable* and *verifiable* logical rules directly from data without any prior rule templates. By focusing on interpretability, our work expands the potential applications of differentiable logical rule learning to those that require a transparent solving process, and it provides a novel perspective in this domain.
> It would be interesting to investigate how to interpret $S$ through a learnable network or another approach / Would it be feasible to use a deep neural network to interpret $S$ of SATNet?
You raise an interesting point about interpreting the $S$ matrix through a learnable network. However, it is essential to clarify that interpretable logical rules aim to be human-readable, enabling white-box problem-solving processes. While neural networks themselves are challenging to interpret, decoding reliable human-readable rules directly from the $S$ matrix using neural networks becomes even more complex. To the best of our knowledge, there is no existing learnable approach that can reliably interpret $S$ to generate a set of logical rules in an interpretable form. Our approach naturally aligns with the optimization process of SATNet and enables the *interchangeability* between weights and propositional logical rules, providing a viable and effective solution for decoding a set of interpretable rules.
> It is not clear how $c_{ij}$ will affect the generated logical rules / Is $c_{ij}$ predefined or learnable?
In SATNet*, the only learnable parameter is the $C$ matrix, which is fully learned from the data. Each element $c_{ij}$ contributes to the weights in our maximum equality formulation.
> The work relies on the prior work SATNet and a few terms are not explained, which hinders understanding the work.
Thanks for pointing out this issue and our typos. In the revised version of our paper, we will provide a comprehensive explanation of SATNet and clarify all relevant terms to ensure a better understanding for readers.
> How is an appropriate value set for n_a?
The value of $n_a$ is set differently for various problems. Generally, $n_a$ can be chosen around the number of defined variables ($n$). In our experiments, we set $n_a$ to the same value as the original SATNet paper to ensure a fair comparison.
> The visual Sudoku problem was used in the SATNet work, have you tested your maximum equality method on this task as well?
Thanks for the suggestions. We conduct additional experiments on 4 $\times$ 4 visual Sudoku datasets. Similar to the non-visual setting, using SATNet and SATNet* alone could achieve a solving accuracy of 99.83\% and 99.28\% respectively, whereas using exact solver Gurobi on our decoded rules could achieve 100\% accuracy. Moreover, we further verify that the decoded rules in this setting satisfy the *unique functional equivalence* as well.
---
Rebuttal Comment 1.1:
Title: Rebuttal acknowledged
Comment: I thank the authors for their replies.
---
Reply to Comment 1.1.1:
Comment: Thank you for the acknowledgment of reading our responses. We appreciate that the questions and concerns you raised in the original review are more constructive and additive to our technical contributions (rather than technical limitations) and hope that we have successfully addressed these points. If that's indeed the case, could you please consider raising your rating of the paper? If not, we would be happy to elaborate on any aspect you may still have questions. Thank you! | null | null | null | null | null | null |
D-CIPHER: Discovery of Closed-form Partial Differential Equations | Accept (poster) | Summary: The paper falls in the realm of data-driven discovery of dynamical systems, PDEs to be specific. It proposes a framework for a class of PDEs termed variational-ready PDEs. This framework is claimed to be less restrictive than existing methods, which make stronger assumptions on the form of the PDE to be discovered and hence cannot recover a significant population of equations. A new optimization scheme, a novel loss function, and empirical evidence are provided to support the claims made in the paper.
Strengths: In general, the paper is well written, and its scope is well thought out.
Notations are clear and introduced properly.
It addresses an important problem; the clear listing of challenges in the introduction is particularly impressive.
The variational loss function is novel.
Weaknesses: My major concern is regarding the way the problem is set up before the solution is proposed. Some of the terms introduced here, although intuitive, lack enough insight to make them more convincing.
Section 7 is too short in the main paper to determine any novelty in the optimization scheme, this definitely needs to be presented better in the main paper, as this is claimed as a contribution.
One of the aspects mentioned in the paper is that of robustness to noisy or infrequent observations; I don't see any evidence to support such claims.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Can the authors discuss the effect of the choice of B-splines as test functions? What about other options, and how do they impact the results? A discussion on this ground would be helpful.
2) Can the authors shed light on the robustness of the proposed framework with respect to noisy and/or infrequent observations? These insights, if theoretical, could take into account sampling frequency, SNR, etc.
3) How about parameterizing the test functions and making them learnable? Has this been tried? If not, what do the authors feel in this regard? While test functions are fairly restrictive, there might still be a way to learn them, or at the very least to choose them from a dictionary.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I don't see direct potential negative social impact. However, since this is a paper in the direction of ML for science and adjacent domains, it is quite possible that these methods when fully developed will have major impact. It will be good to have a word of caution regarding this, to make sure we acknowledge the vast amount of domain expertise already available in all scientific fields and not use ML as a tool to replace all human knowledge.
Authors have acknowledged technical limitations appropriately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer ubhm,
We appreciate your thorough feedback and encouraging comments. We summarize the improvements we have made to the paper based on your review and we answer your questions below.
### Actions taken
1. Elaborated on the requirements and other choices of testing functions in Appendix C.4.
2. Provided more details of CoLLie in Section 7 and revised the contributions section.
3. Added a discussion on the broader impact in Appendix F.
### Problem setup
We are glad that you found the terms we define in Section 3 intuitive. We elaborate on their significance in Appendix F.7.
Table 10, in particular, provides a concise summary of how the *evolution assumption* and the *linear combination* assumption impact the optimization problem. As we explain in Section 3, we introduce the *derivative-bound* and *derivative-free* parts of a PDE because they allow relaxing the two previously mentioned assumptions while still admitting a variational formulation (as not every PDE admits a variational formulation). This comes from a crucial observation that *no additional constraints* need to be put on the terms without derivatives to admit a variational formulation.
As we write in Appendix F.7, we believe the new notions complement the standard recognized PDE classes such as semi-linear, quasilinear, hyperbolic, etc. These standard classes were introduced predominantly to characterize the solving techniques or the properties of the solutions, whereas the notions we introduce relate to the difficulty of discovering such equations from data. Therefore we believe they offer substantial insight into this area.
### CoLLie
We have added the following description of CoLLie to Section 7 and we modified the wording of the contribution section to indicate that CoLLie is a relatively minor contribution, compared with the main contribution of the paper, i.e., introducing new notions to describe PDEs from a discovery perspective, proposing a new general class of PDEs, a novel objective function, and a discovery algorithm.
**Added to Section 7:**
We observe that this optimization problem is related to the one encountered in LASSO. Denote by $\mathbf{z}\_0$ the solution that minimizes $||\mathbf{A}\mathbf{z}-\mathbf{b}||\_2^2$ (no constraints). If $||\mathbf{z}\_0|| > 1$, the problem is equivalent to finding $\lambda$ (in the Lagrangian form of LASSO, Equation 48) such that the LASSO solution has norm 1. Least Angle Regression (LARS) is a popular algorithm for minimizing the LASSO objective that computes complete solution paths. These paths show how the coefficients of the solution change as $\lambda$ moves from $0$ to $\lambda_{max}$ (from no constraints to effectively forcing the solution to be $\mathbf{0}$). See Figure 6 in Appendix D2. CoLLie uses these solution paths to calculate the exact solution to the optimization problem. The case $0 < ||\mathbf{z}_0|| < 1$ is harder as it corresponds to $\lambda < 0$. CoLLie addresses this challenge by *extending* the solution paths generated by LARS beyond $\lambda=0$ into $\lambda<0$. We assume that the paths continue to be piecewise linear and keep their slope (Fig. 6, App. D2). CoLLie then uses these assumptions to efficiently find an approximate solution. We provide a detailed description of CoLLie in Appendix D.
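The path-based idea for the $||\mathbf{z}_0|| > 1$ case can be illustrated with scikit-learn's LARS implementation. This is a rough sketch on synthetic data (the data, the linear interpolation between knots, and all variable names are illustrative assumptions, not the authors' CoLLie code):

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))
b = A @ rng.standard_normal(8) + 0.1 * rng.standard_normal(50)

# lars_path returns the full LASSO solution path: coefs[:, k] is the
# solution at penalty alphas[k], from the all-zero solution down to
# the unconstrained one at alpha = 0.
alphas, _, coefs = lars_path(A, b, method="lasso")

norms = np.linalg.norm(coefs, axis=0)   # grows as alpha shrinks toward 0
k = np.searchsorted(norms, 1.0)         # first knot with norm >= 1
t = (1.0 - norms[k - 1]) / (norms[k] - norms[k - 1])
z = coefs[:, k - 1] + t * (coefs[:, k] - coefs[:, k - 1])
print(np.linalg.norm(z))  # approximately 1
```

Because the norm is convex rather than exactly linear along each piecewise-linear segment, the interpolated point has norm slightly below 1; the sketch only shows how the LARS knots bracket the norm-constrained solution.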
### Robustness to noisy and infrequent observations
As we show in Appendix F.5, estimating the derivatives is challenging and it becomes more difficult the higher the order of the derivative (see Figure 10). We thus show empirically that our method, which circumvents derivative estimation, performs better than the alternatives in scenarios with varying noise levels (Figure 2, Figure 3, Figure 4, Table 3, Table 8). We also show the impact of the frequency of observations and the number of samples in Figure 3. In all scenarios, our method is more robust than the alternative. We discuss the challenge of establishing theoretical error bounds in Appendix F.9.
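The claimed difficulty of derivative estimation under noise is easy to reproduce. Below is an illustrative numpy sketch (synthetic data, not the paper's experiment) showing that finite-difference estimates amplify noise far more at higher derivative orders:

```python
import numpy as np

rng = np.random.default_rng(1)
h = 0.01
x = np.arange(0.0, 2 * np.pi, h)
u = np.sin(x) + 0.01 * rng.standard_normal(x.size)  # noisy samples of sin(x)

du = np.gradient(u, h)    # first-derivative estimate (central differences)
d2u = np.gradient(du, h)  # second-derivative estimate

err1 = np.sqrt(np.mean((du - np.cos(x)) ** 2))   # RMSE vs. true cos(x)
err2 = np.sqrt(np.mean((d2u + np.sin(x)) ** 2))  # RMSE vs. true -sin(x)
print(err1, err2)  # the second-order error is larger by orders of magnitude
```

Roughly, i.i.d. noise of size $\sigma$ contributes an error of order $\sigma/h$ per differentiation, which is why an approach that avoids derivative estimation altogether (as D-CIPHER does via the variational formulation) is more robust here.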
### Testing functions
Testing functions should satisfy the following conditions:
1. Be sufficiently smooth (at least $\mathcal{C}^K$ for a $K^{th}$ order PDE)
2. Compact support
3. Derivatives can be computed analytically
4. Orthonormal
Conditions 1 and 2 follow directly from Definition 3. Condition 3 is necessary because we do not want to estimate the derivatives of the testing functions. Condition 4 follows from the result obtained in [1] that suggests that these functions should be a subset of an orthonormal basis of L2 space.
Other testing functions are possible and examining them constitutes an interesting research direction. In particular, piecewise polynomials as defined in [2] or various wavelets. Ideally, we would like to choose wavelets that form an orthonormal basis for the L2 space such as
- Shannon wavelets - smooth ($\mathcal{C}^{\infty}$) but not compact
- Meyer wavelets - smooth ($\mathcal{C}^{\infty}$) but not compact (better rate of decay than Shannon)
- Daubechies wavelets - smooth ($\mathcal{C}^{K}$ for a specified $K$) and compact but they do not have a closed-form expression.
Another interesting avenue of research would be to adapt the testing functions to the input data.
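Conditions 1-3 are easy to verify for B-splines with SciPy. This is a minimal sketch; the actual knot placement and normalization used in D-CIPHER may differ:

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline basis element on knots 0..4: C^2-smooth and compactly
# supported on [0, 4] (conditions 1 and 2).
phi = BSpline.basis_element(np.arange(5.0), extrapolate=False)

# Condition 3: derivatives come in closed form, no numerical estimation.
dphi = phi.derivative()

print(phi(2.0))   # 2/3, the peak of the bump
print(dphi(2.0))  # 0 at the symmetric peak
print(phi(0.0))   # vanishes at the left edge of its support
```

Orthonormality (condition 4) is a property of the chosen family as a whole rather than of a single basis element, which is one reason the wavelet families above are interesting alternatives.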
### Impact
We have added a section that acknowledges that D-CIPHER is not designed to or capable of replacing human experts in scientific discovery and should be employed as a support tool as a part of a much broader scientific process.
**References**
1. Qian, Z., Kacprzyk, K. & van der Schaar, M. D-CODE: Discovering Closed-form ODEs from Observed Trajectories. ICLR (2022).
2. Messenger, D. A. & Bortz, D. M. Weak SINDy for partial differential equations. Journal of Computational Physics (2021).
Thank you once again for dedicating your time to reviewing our paper. We hope our responses have been satisfactory in addressing your queries. If any aspects still require additional explanation or if you have further questions, please let us know. We are eager to provide the necessary explanations.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, and proposed update. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer ubhm,
We want to express our sincere gratitude for the time and effort you dedicated to evaluating our paper and the rebuttal. We are pleased by your continued positive assessment of our work. Your insightful comments have undeniably contributed to the enhancement of our paper.
Kind regards,
Authors of Submission13865 | Summary: This paper proposes a framework (D-CIPHER) to discover closed-form PDEs and ODEs. The framework is more general than some of the previously existing methods, and in particular can handle a class of PDEs defined as variational-ready PDEs in the paper. The empirical experiments evaluated the discovery performance on synthetic data for a set of different equations (showing both comparisons with existing methods on discovering linear combinations and results on discovering more challenging equations).
Strengths: Originality: From what I could tell, this paper proposes an original framework to discover broader classes of PDEs and ODEs. That being said, I'm not familiar with the literature in the area so I cannot fully speak to the originality.
Quality: The empirical experiments, even though synthetic, appear to be thoughtfully designed and demonstrate notable improvements.
Clarity: This paper is very well-written with a good balance of technical details and general introductions. I enjoyed reading it even as someone outside of the ODE/PDE field.
Weaknesses:
Significance: This is probably my biggest question for the paper. What types of real-world scenarios could D-CIPHER be applied to? The Discussion section briefly mentions finding heat and vibration sources and discovering population models and epidemiological models. However, at the level of the current discussion, these applications all sound very abstract. The paper would benefit from more grounding in concrete applications. If it's possible to add experiments on real data, that would strengthen the paper. Even if not, the paper would still be improved with a more extensive discussion on how D-CIPHER could be applied in each of the relevant scenarios. For example, what would the form be (in step 1) based on prior knowledge? How would the fields be estimated (step 2)?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See "weaknesses"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The very last paragraph discusses some potential limitations, but in my opinion it would be very helpful to have some "negative examples" in the paper, i.e., synthetic experiments where D-CIPHER fails, and to explore the reasons for failure.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Znhz,
We appreciate your thorough feedback and encouraging comments. We are particularly grateful for your suggestion on discussing how D-CIPHER can be applied in real-world scenarios. As a result, we have added two sections to Appendix F that discuss it in more detail. We firmly believe that these additions enhance the paper's overall quality and improve the accessibility of our approach for users.
### Actions taken
We have added the following two sections to Appendix F:
1. Impact on real-world problems
2. D-CIPHER in practice.
### Impact on real-world problems
D-CIPHER is especially useful in discovering governing equations for systems with more than one independent variable, for instance spatiotemporal data or temporal data structured by age or size. In particular, we envision D-CIPHER being useful in modeling spatiotemporal physical systems, population models, and epidemiological models.
**Spatiotemporal physical systems.** D-CIPHER may prove useful in discovering equations governing the oceans or the atmosphere. For instance, some places actively add or remove CO2 from the atmosphere. These “sources” and “sinks” are likely to be described by a $\partial$-free part, which D-CIPHER is specially equipped to discover. The same holds for ocean temperature, where the $\partial$-free part can describe a heat source. Another area of application is modeling seismic waves across the Earth’s crust, where the $\partial$-free part can describe the vibration source (e.g., an earthquake).
**Population models.** Population models can be used in agriculture to determine the harvest, or for pest control to predict the impact of pests on the crop. They have also been used in environmental conservation to model the population of endangered species, and in modeling the growth of cells to better understand tumor growth. Moreover, understanding the evolution of a population pyramid for a specific country may prove invaluable in ensuring its economic stability. Since in all these scenarios the rates of growth and mortality are likely to be described by a $\partial$-free part, D-CIPHER is uniquely positioned to discover such equations as an aid to human experts.
**Epidemiological models.** Epidemiological models are crucial during a pandemic for better planning and interventions. For many diseases, the rates of mortality and infection are age-dependent. Thus, modeling the spread of a disease using PDEs (rather than ODEs) might provide superior results.
### D-CIPHER in practice
We have added a section to Appendix F discussing guidelines and things to consider while using D-CIPHER. In particular, it discusses the following points (a more detailed discussion is provided in the appendix).
**The order of the differential equation.** One of the first considerations should be choosing the order of the differential equation $K$. For many dynamical systems, $K=2$ is sufficient; if we expect very complicated behavior, considering $K=3$ or even $K=4$ may be warranted. Note that we show D-CIPHER can discover a fourth-order PDE (the Kuramoto-Sivashinsky equation) in Section 8.1.
**Homogeneous equations.** Before searching through the whole space of closed-form $g$ (derivative-free parts), we can consider whether the equation we want to discover may be homogeneous. These experiments on the restricted search space can provide quick insights before searching through all closed-form derivative-free parts.
**Terms in a dictionary.** For a given order of a differential equation $K$, it is a good idea to include all standard differential operators up to order $K$ acting on all the variables. For instance, for $K=2$ and $M=1+1$ we could choose $\mathcal{Q} = (\partial_t u, \partial_x u, \partial_t \partial_x u, \partial_t^2 u, \partial_x^2 u)$. That allows us to cover all linear PDEs with constant coefficients up to that order. To allow for non-linear PDEs, we can include a term like $\partial_x(u^2) = 2 u \partial_x u$ that often describes advection (as in Burgers' equation).
**Dictionary steering when dealing with many dependent variables.** When we deal with a system of PDEs rather than a single PDE, choosing a dictionary becomes increasingly important. As we explain in Appendix F.2, discovering whole systems of PDEs is very challenging, and D-CIPHER is not designed to do so out of the box. However, we show how this can be done in certain situations: we can steer what kind of equations are discovered by choosing the terms in the dictionary.
**Estimation algorithm.** Estimation algorithms make different assumptions about the data-generating process and should be chosen based on domain expertise. As we show in Appendix F.10, algorithms that produce smoother functions, such as Gaussian Process regression and cubic spline interpolation, tend to yield good results. We can weigh the advantages and disadvantages of these methods. For instance, Gaussian Process regression works very well for smooth signals; however, it is computationally intensive and might not perform well if the signal is not smooth enough (i.e., it has abrupt changes). Spline interpolation, on the other hand, is faster and more appropriate for less smooth signals, but it might introduce unwanted artifacts because it uses cubic polynomials to interpolate the data.
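As a rough illustration of the spline option discussed above (a sketch under an assumed noise level, not D-CIPHER's actual estimation pipeline), a smoothing cubic spline can recover both the field and its unobserved derivative from noisy zero-order samples:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
u_true = np.sin(t)
u_noisy = u_true + 0.01 * rng.normal(size=t.size)  # noisy zero-order samples

# Smoothing factor: s ~ n * sigma^2 is a common heuristic when the noise
# level sigma is roughly known; this choice is an assumption, not a rule.
spl = UnivariateSpline(t, u_noisy, k=3, s=t.size * 0.01**2)
u_hat = spl(t)                 # smoothed field estimate
du_hat = spl.derivative()(t)   # derivative estimate, never observed directly
```

A Gaussian Process regressor could be swapped in for `UnivariateSpline` when the signal is known to be very smooth, at a higher computational cost.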
### Failure modes
We note that we not only discuss the *potential* limitations but actually show some of these limitations in our experiments. In particular, Figure 3 shows how D-CIPHER fails to discover the target equations if
1. The noise becomes too high
2. The sampling interval is too large
3. The number of samples is too low
Thank you once again for dedicating your time to reviewing our paper. We hope our responses have been satisfactory in addressing your queries. If any aspects still require additional explanation or if you have further questions, please let us know. We are eager to provide the necessary explanations. | Summary: The paper proposes a new way of discovering closed-form Partial Differential Equations (PDEs) from data. This especially aims at high-order PDEs, especially when the specific form is not pre-assumed and there is a lack of observations on derivatives. The key idea is to represent the unknown PDE with terms that are bound by derivatives and terms that are not; the former kind can be easily and reliably estimated from data, while the latter (derivative-free) kind can be estimated by leveraging symbolic regression.
Several synthetic datasets are employed for evaluation, many of which are simulated from equations that do not satisfy the linear combination assumption made by existing work.
Strengths: Strengths:
1. A new framework for discovering PDEs, especially the ones with high-order derivatives and a lack of direct observations on the derivatives.
2. A comprehensive evaluation on many different PDEs satisfying and beyond the assumption of PDEs made by previous work.
3. The comparison seems to show a better performance.
4. Good exposition. The paper has a good balance between background and technical details.
Weaknesses: I am in general in favour of this paper. However, it is not my area of expertise, so it would be good if the authors could clarify some questions here.
1. Reliance on symbolic regression. I wonder to what extent the proposed framework has to rely on symbolic regression. This opens up several questions. (1) How inclusive or comprehensive does the dictionary have to be? What if some key derivatives are not present in the dictionary? (2) Can the authors provide more details on the computational time, in addition to E.2? It would be good to show the computation time in F.3 when the dictionary is gradually increased.
2. Comparison. It is arguably true that derivatives are hard to measure in applications. But since the experiments are done using synthetic data, I wonder what happens if observations of derivatives are available. Will the proposed method still outperform existing methods? I would assume the proposed framework still works to some extent, although it would not be able to make use of the observed derivatives.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see my 'Weaknesses' section above.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors mentioned limitations, but in real-world scenarios there are other factors that might make applying this framework difficult. The first is the sparsity of observations: sensors are normally not well distributed and are sometimes extremely sparse, so the estimate based on the zero-order information might not be reliable to start with. The second is the type of noise, which is normally unknown and needs to be estimated together with the data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer kC9e,
We appreciate your thorough feedback and kind remarks. Your questions have undoubtedly strengthened our work. In particular, we want to thank you for your inquiry about taking advantage of the derivative data should it be available. As D-CIPHER can in fact very naturally use such additional information, we have added a section in the appendix explaining how this can be achieved. We summarize the improvements we have made to the paper based on your review and we answer your questions below.
### Actions taken
We have added the following sections to the appendix:
1. Section "How comprehensive does the dictionary $\mathcal{Q}$ need to be?" in Appendix F.
2. Plot of the computation time as the dictionary is gradually increased in Appendix F.3
3. Section "How to take advantage of the observed derivatives?" in Appendix F
### Reliance on symbolic regression
We rely on the symbolic regression algorithm to find a closed-form derivative-free part. If the equation is suspected to have a derivative-free part equal to 0 or is linear in $u$ then the symbolic regression algorithm is not necessary and we can just add the term $u$ to the dictionary $\mathcal{Q}$ (as a 0$^{\text{th}}$ order derivative). This, however, only works in scenarios when the target equation is in a linear combination form. To discover anything more complicated, we need to rely on a symbolic regression algorithm.
### How comprehensive does the dictionary $\mathcal{Q}$ need to be?
We have attached a table (Table 1 in the attached PDF) that shows 17 different differential equations and the terms each needs in a dictionary. All of them can be discovered with a dictionary of size 10, $\mathcal{Q} = (\partial_t u, \partial_x u, \partial_t \partial_x u, \partial_t^2 u, \partial_x^2 u, \partial_t \partial_x^2 u, \partial_x^3 u, \partial_x^4 u, \partial_x(u^2), \partial_x^2 (u^2))$, and 9 of those equations can be described with a dictionary containing just 4 terms, $\mathcal{Q} = (\partial_t u, \partial_x u, \partial_t^2 u, \partial_x^2 u)$. It is thus likely that such small dictionaries are sufficient to discover most well-known equations. Many differential equations have similar derivative-bound parts and differ in their derivative-free parts. Being able to discover *any closed-form derivative-free part* is what makes D-CIPHER stand out.
### Computation time when the dictionary is gradually increased
As shown in Figure 1, CoLLie's computation time does not increase significantly when the dimensionality is increased. We have added in Appendix F.3 a plot that shows how the computation time increases when the dictionary is gradually increased. The plot can be seen in the attached PDF in Figure 1.
### How to take advantage of the observed derivatives
D-CIPHER can make use of observed derivatives (if they are available) by adapting the dictionary. Consider a setting with a dictionary $\mathcal{Q} = (\partial_t u, \partial_x u, \partial_t \partial_x u, \partial_t^2 u, \partial_x^2 u)$. If we happen to have measurements of $\partial_t u$, then we can introduce a new variable $v = \partial_t u$ and change the dictionary to $\mathcal{Q} = (v, \partial_x u, \partial_x v, \partial_t v, \partial_x^2 u)$. Note that we have performed experiments where the dictionary contains more than one dependent variable in Appendix F.2. With observed derivatives, we can also enlarge the space of Variational-Ready PDEs by allowing $g$ (the $\partial$-free part) to depend on $v$ as well.
Thank you once again for dedicating your time to reviewing our paper. We hope our responses have been satisfactory in addressing your queries. If any aspects still require additional explanation or if you have further questions, please let us know. We are eager to provide the necessary explanations.
---
Rebuttal Comment 1.1:
Title: Rebuttal clarifies my questions
Comment: Thanks for the detailed responses and the added content. I will keep my confidence low as this is not my area of expertise but I am happy to see this paper accepted.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer kC9e,
We appreciate your time invested in assessing our paper and the rebuttal. We are delighted to see that you reaffirmed your acceptance of our work! Your constructive comments have played a significant role in enhancing the quality of our paper.
Kind regards,
Authors of Submission13865 | null | null | Rebuttal 1:
Rebuttal: ### Additional PDF
Table 1 in the attached PDF shows 17 different differential equations and what terms they need in a dictionary.
Figure 1 in the attached PDF shows how the computation time increases when the dictionary is gradually increased.
Pdf: /pdf/f6cf6649829fd862cc88c00944261f0d77b34b23.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Energy-based learning algorithms for analog computing: a comparative study | Accept (poster) | Summary: This paper aims to investigate existing energy-based learning algorithms on an equal footing, using the same models and datasets. Energy-based algorithms including contrastive learning (CL), equilibrium propagation (EP), and coupled learning (CpL) are compared. Experiments conducted on deep Hopfield networks (DHNs) show that the centered variant of EP is the best-performing algorithm.
Strengths: This work provides a systematic comparison of the different energy-based learning algorithms. The idea is meaningful and the presentation is satisfying.
Weaknesses: It does not present new solutions for efficient energy design/learning, and the generalizability of the comparison (beyond DCHNs) should be further discussed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In this paper, what’s the reason that deep convolutional Hopfield network (DCHN) is suitable for generic comparison? The authors should present detailed illustrations.
2. The experimental results show that equilibrium propagation performs better than coupled learning. Does this conclusion hold true for large-scale data training? The authors need to present deeper analyses or proofs.
3. It seems that this paper does not provide new solutions for efficient energy design/learning. According to the conclusions presented in this paper, how can we design powerful EBL models with more complex network architectures?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We have explained in the common rebuttal the novelties in our work. In particular, our work introduces a novel asynchronous update scheme for accelerating the convergence of the energy-minimization process in DCHNs. This asynchronous update scheme allows us to achieve a 13.5x speedup compared to Laborieux et al. (2021) in our overall simulations. (We refer to Appendix C in the supplementary material for details.)
We have also clarified in the common rebuttal why we chose DCHNs for comparing the different training algorithms. Among the energy-based architectures compatible with analog computing, we note that DCHNs achieve the highest reported accuracies for energy-based learning algorithms on image-recognition tasks, which is the focus of this work. While other architectures have not been studied, we believe that DCHNs are the best choice for this reason.
We agree that there is no obvious reason why DCHNs should be generic for the comparison of EBL algorithms, and that we have no evidence that our findings would generalize beyond this setting. We will explain in the conclusion section that our experimental study is limited to DCHNs and that our findings might not hold in the context of other network models. However, we note that our Theorems 2 and 3 indicate that our findings are likely to generalize beyond the setting of DCHNs.
Finally, while there is no certainty that our findings would hold on large-scale data, we emphasize that our comparative study is the very first such study for comparing EBL algorithms.
---
Rebuttal Comment 1.1:
Title: Engaging in a discussion with Reviewer AFqD
Comment: Dear Reviewer AFqD,
Thank you for your time in reviewing our work. As the discussion period is on-going, we would be happy to address any remaining question you may have in the light of our rebuttal.
We understand from your review that you raised concerns about:
(1) the novelty of our work
(2) the choice of DCHNs for comparing the seven learning algorithms
(3) the generalization of our empirical findings to other models and large scale data
We believe that we have addressed these three points in our response from August 9. For your reference, here is a summary of our response from August 9 that addresses these three points:
(1) Our work introduces a novel energy minimization algorithm (the “asynchronous update procedure”, which yields a 13.5x speedup compared to Laborieux et al, 2021), and three novel energy-based learning algorithms, namely NEP, NCpL and CCpL. Our study revealed that our NEP algorithm performs much better than the original PEP algorithm introduced by Scellier and Bengio (2017), and often performs as well as the CEP algorithm introduced by Laborieux et al (2021). Similarly, our study revealed that both our CCpL and NCpL algorithms perform better than the original PCpL algorithm introduced by Stern et al (2021). Kindly note that after reading our rebuttal, Reviewer dBi8, who shared similar concerns as yours regarding the novelty of our work, increased his/her score by recommending us to "emphasiz[e] that NEP, NCpL and CCpL are novel contributions of this work more prominently in revisions" and "agree[s] that these are useful and novel variants which yield counter-intuitive insights into EBL learning".
(2) Our work is focused on models relying on local update rules, both for the layers’ activation and for the weights. Our motivation is that such models are promising for the development of low-power hardware for AI (analog chips, as opposed to digital chips such as GPUs). Our choice to use DCHNs for comparing the seven energy-based learning algorithms is that, among the energy-based models relying on local update rules, DCHNs are to date the best performing architectures.
(3) While we have no proof that our empirical findings will generalize beyond DCHNs and to larger datasets, our novel Theorems 2 and 3 tell us that this is likely to be the case.
We thank you again for your time and remain at your disposal for any further questions you may have.
---
Rebuttal Comment 1.2:
Title: Rebuttal Response
Comment: Thanks for the authors' response to my questions. In the rebuttal stage, the authors have addressed most of my concerns and clarified the contribution of this work. This paper is well-written and organized, but the contribution is not very significant, so I will keep my rating score.
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer AFqD,
Thank you for your positive comment about the quality of the writing and of the organization of the paper. | Summary: The paper focuses on exploring and comparing various energy-based training methods for deep convolutional Hopfield networks. The performance of contrastive learning, positively-perturbed, negatively-perturbed, and centered versions of Equilibrium propagation and Coupled learning algorithms are evaluated. The paper establishes state-of-the-art (SOTA) results for some of these algorithms.
Strengths: The set of experiments conducted in this paper provides a comprehensive comparison of the performance of different training methods on a variety of datasets. This extensive evaluation contributes to a robust understanding of the performance of deep convolutional Hopfield networks (DCHN) and their applicability across different datasets.
Weaknesses: 1) One issue with this paper is the lack of citation and discussion of recent works on Energy Based Models (EBMs) that are relevant to the topic, such as Grathwohl et al. (2019), Nijkamp et al. (2019b;a), and Du & Mordatch (2019). These papers analyze the application of EBMs to image classification tasks, propose different training techniques, and compare their performance. It would be beneficial to include a discussion of these works and how they relate to the research presented in this paper.
2) Subsection 2.5, which presents the theoretical results, appears disconnected from the main body of the paper. There is no discussion of the implications of these theorems or how they relate to the research findings. It is important to establish a clear connection between this theoretical section and the rest of the paper to provide a cohesive narrative.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What factors motivated the authors to choose the Deep Convolutional Hopfield Network (DCHN) model as the framework for comparing the training methods in the paper?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper does not include a discussion on the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we thank the reviewer for their remark on recent works on energy-based models (EBMs). We understand that the term “energy-based model” is ambiguous and refers to different lines of works with very different motivations, which we would like to clarify here. For clarity, we start by highlighting the common points between our work and the referenced works, and then we discuss the differences and implications of these differences.
Both in our work and in the referenced works, the model is defined by an energy function, which is a scalar function of the weights and activations of the network. Inference and training in these EBMs involve minimizing this energy function in the activation space. For instance, in Grathwohl et al (2020), they perform gradient descent in the input space:
$$ x \leftarrow x - \alpha \partial_x E(f_\theta(x)) $$
where $f_θ(x)$ represents the model logits given an input $x$. Similarly, in our work we minimize an energy function (the Hopfield energy) thanks to our “asynchronous update procedure” in the activation space.
There is, however, a crucial difference between our work and the referenced works. Our work is concerned with models in which inference and learning are achieved using local update rules, with the long-term motivation of building new, highly-efficient (low-power), hardware for AI. The whole purpose of our line of work is to totally obviate the use of the backpropagation algorithm and perform model optimization (inference and learning) by leveraging locally computed quantities, with the longer term goal in mind to design energy-efficient processors dedicated to model optimization.
In our work, the asynchronous update procedure to minimize the energy function of a Hopfield network reads:
$$ s_i \leftarrow \sum_j w_{ij} s_j + b_i $$
and the learning rule of CL/EP/CpL in such Hopfield networks reads:
$$\Delta w_{ij} \propto \left( s_i^{\rm perturbed} s_j^{\rm perturbed} - s_i^{\rm free} s_j^{\rm free} \right)$$
Importantly, both the energy minimization procedure and the learning rule require solely locally available information to update the state variables $s_i$ and weights $w_{ij}$. This feature makes our model amenable to highly efficient implementation on dedicated (analog) hardware. Yi et al (2023) have shown that such Hopfield networks can be built in memristive networks (analog networks) and trained using 10,000x less energy compared to DNNs trained on GPUs.
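A toy numerical sketch of these two local rules (the network size, weight scales, and the quadratic leak term in the energy are illustrative assumptions, not the paper's actual DCHN setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.1, size=(n, n))
W = (W + W.T) / 2.0       # symmetric weights
np.fill_diagonal(W, 0.0)  # no self-connections
b = rng.normal(scale=0.1, size=n)

def energy(s):
    # Hopfield energy with a quadratic leak term; the asynchronous update
    # below is exact coordinate-wise minimization of this function.
    return 0.5 * s @ s - 0.5 * s @ W @ s - b @ s

def relax(s, sweeps=50):
    s = s.copy()
    for _ in range(sweeps):
        for i in range(n):            # asynchronous: one unit at a time
            s[i] = W[i] @ s + b[i]    # local rule: s_i <- sum_j w_ij s_j + b_i
    return s

def contrastive_update(s_free, s_pert, lr=0.01):
    # Delta w_ij proportional to s_i^pert s_j^pert - s_i^free s_j^free
    return lr * (np.outer(s_pert, s_pert) - np.outer(s_free, s_free))

s_free = relax(np.zeros(n))   # equilibrium ("free") state
```

Both `relax` and `contrastive_update` touch only quantities local to each unit or synapse, which is what makes such models amenable to analog hardware.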
In the referenced works, the motivation is very different from ours. For example, Grathwohl et al (2020) aim to scale EBM training to build large, well-calibrated and adversarially robust discriminative and generative models. Their algorithms do not preclude the use of the backpropagation algorithm, which they use not only for parameter gradient computation, but also to run stochastic gradient Langevin dynamics. Because they make use of the backpropagation algorithm, which in turn requires that their models runs on digital processors such as GPUs, it is unclear if their model could be useful to build low-power hardware for AI, which is the motivation of our work.
To avoid the confusion between the EBMs of the referenced works and the models considered in our work, we propose to change the name of our approach to “energy-driven models” instead of “energy-based models”, and to change the title to “Energy-driven learning algorithms for analog computing: a comparative study”. We will also explain in the introduction section that the energy-driven approach of our work differs from this other line of work on energy-based models. We hope that this clarifies the motivation of our work.
Second, we respectfully disagree with the reviewer’s remark that “Section 2.5, which presents the theoretical results, appears disconnected from the main body of the paper”. We discuss the implications of our theorems and how they relate to the research findings in the discussion section (section 4.2 of the paper). For example, we write: “algorithms employing a positive perturbation (P-EP and P-CpL) perform significantly worse than those employing a negative perturbation (N-EP and N-CpL). [...] Theorem 2 sheds light on this observation: N-EP optimizes an upper bound of the cost function, whereas P-EP optimizes a lower bound”. As pointed out by reviewer dBi8, “Theorems 2 and 3 very nicely corroborate the empirical observations”.
Finally, our motivation for choosing DCHNs for comparing the different learning algorithms is that DCHNs are suitable for building low-power AI. As explained in the common rebuttal (see above), among all energy-based architectures compatible with analog hardware, DCHNs are the best-performing models to date.
Reference:
Yi, S. I., Kendall, J. D., Williams, R. S., & Kumar, S. (2023). Activity-difference training of deep neural networks using memristor crossbars. Nature Electronics, 6(1), 45-51.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed clarification provided by the authors. It has come to my understanding that a misconception on my part regarding the terminology "energy-based models" has contributed to the confusion. I strongly recommend that the changes to the paper's title and the introduction, as outlined in the rebuttal, be incorporated into the final version of the manuscript. I have opted to enhance my rating by 2 points, resulting in a revised score of 5.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 5nrK,
Many thanks for acknowledging the differences between the scope of our work (energy-based algorithms relying on local update rules) and the literature on EBMs that relies on backpropagation, and for increasing your score. In case of acceptance, we will amend the title and the introduction to emphasize these differences, as suggested.
---
Rebuttal 2:
Title: We would like to engage in a constructive discussion about our submission
Comment: Dear Reviewer 5nrK,
Thank you for your time. As the authors/reviewers discussion period is on-going, we would be very grateful if you gave us the opportunity to engage in a constructive discussion about our submission. From your review, we understand that you raised concerns about:
(1) our theoretical section
(2) the relation of our work to other works on energy-based models
(3) the choice of DCHNs for comparing the seven learning algorithms
We believe that we addressed your concerns in our response from August 9. Here is a summary of our response:
(1) Our theoretical results (Theorems 2 and 3) corroborate our empirical findings: NEP outperforms PEP, and CEP is the best performing algorithm. Reviewer dBi8 also noted that our theoretical results nicely corroborate our empirical results.
(2) The motivation of our work is that we are interested in local update rules (both for the layers’ activations and for the weights) for analog computing. This motivation is explained in the abstract and the introduction section of our manuscript, and also noted by Reviewer JWNX: “Investigating energy-based learning algorithms is interesting, especially for their compatibility with analog (post-digital) hardware.” We have clarified that models such as the one of Grathwohl et al (2020) differ from the motivation of our work because they rely on global differentiation (backpropagation) rather than locally computed quantities, and therefore are not directly connected to our line of works.
(3) Our choice to use DCHNs for comparing the seven learning algorithms is that, among the models relying on local update rules, DCHNs are to date the best performing architectures.
From these clarifications, we would like to ask if you still have other doubts based on which you recommend rejection?
We thank you again for your time. | Summary: This work conducts an extensive comparison of several energy-based learning (EBL) algorithms, including contrastive learning (CL), equilibrium propagation (EP) and coupled learning (CpL). Depending on the type of perturbation used, 9 variants of EP and CpL are examined. Deep Hopfield networks (DHNs) on five vision tasks (MNIST, F-MNIST, SVHN, CIFAR-10 and CIFAR-100) are trained, and different EBL algorithms are compared.
Strengths: Investigating energy-based learning algorithms is interesting, especially for their compatibility with analog (post-digital) hardware. The paper is generally well-written. The concepts and algorithms are mostly clearly presented. Experiments are detailed with good analysis.
Weaknesses: Not clear that Theorem 1-3 are new, or known from existing literatures.
The paper is good for a comparison study of different EBL algorithms, but the algorithm contribution is limited.
The performance of EBL-trained DCHNs on vision tasks (e.g., CIFAR-10) are far behind modern neural networks. It would be better to add such information and comparison.
https://paperswithcode.com/sota/image-classification-on-cifar-10
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: see above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding the question on the novelty of the theorems, we would like to clarify that Theorem 1 is not new; it is presented and proved, e.g., in Movellan (1991).
On the other hand, Theorems 2 and 3 are new in the literature: no prior work had shown that EP (resp. CEP) performs gradient descent on the surrogate loss function L_beta (resp. L_{-beta,+beta}).
We have explained in the common rebuttal the algorithmic novelties in our work. While the network architecture (the deep convolutional Hopfield network, DCHN) isn’t novel, we introduce in our work three novel EBL algorithms (NEP, NCpL and CCpL, to compute the parameter gradients) and a novel energy minimization algorithm (the “asynchronous update method” to compute the equilibrium states of DCHNs). Importantly, our novel energy minimization algorithm leads to a 13.5x speedup compared to Laborieux et al. (2021).
Finally, we agree that the performance of DCHNs on the tasks considered in this work lags behind that of SOTA deep learning models. We propose to report the results of SOTA algorithms for comparison, as suggested by the reviewer.
---
Rebuttal Comment 1.1:
Title: Engaging in a discussion with Reviewer JWNX
Comment: Dear Reviewer JWNX,
Thank you again for the time already spent on reviewing our work. As the discussion period is on-going, we would be very grateful to be given the opportunity to address any remaining concern you may have after reading our rebuttal. Kindly note that after reading our rebuttal, Reviewer dBi8, who shared similar concerns as yours, decided to raise his/her score by recommending us to "emphasiz[e] that NEP, NCpL and CCpL are novel contributions of this work more prominently in revisions" and "agree[s] that these are useful and novel variants which yield counter-intuitive insights into EBL learning". Let us know if we can further complete our answer. | Summary: This work reviews and compares recent EBL methods on classic image classification benchmarks. The paper first reviews a variety of EBL methods including CL, EP, P-EP, N-EP, C-EP, CpL, P-CpL, N-CpL, and C-CpL. Next, two theorems are presented which show that P/N/C EP approximate gradient descent on the cost function. The experiment section presents a large-scale comparison of the EBL methods along with Backprop methods for image classification on image datasets whose complexity ranges from MNIST to CIFAR-100. The study finds that N-type EBL variants outperform P-type EBL variants, and that C-EP performs best overall. The study also finds that MNIST is not a suitable dataset for identifying differences between method outcomes, and that more complex datasets like CIFAR-100 reveal important differences. A discussion of possible explanations for the results is presented.
Strengths: * The paper is a very useful resource for understanding recent EBL methods.
* Theorems 2 and 3 very nicely corroborate the empirical observations about the effectiveness of N-type over P-type, and the overall superiority of C-type.
* A large-scale study of EBL methods gives crucial context for understanding which EBL techniques are useful and which are not. The findings could serve to motivate directions for developing improved EBL algorithms.
Weaknesses: * The primary weakness of this paper is a lack of novelty. It provides a very useful survey of the literature and a much-needed large scale comparison of EBL methods. However, no new techniques are proposed, and the experimental evaluation is somewhat routine.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Can the ideas in this paper be used to propose a new EBL algorithm which draws from the useful properties of the best current models?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We have addressed the novelty factor in the common rebuttal (see above). We would like to emphasize that our asynchronous update method to compute equilibrium states of DCHNs (i.e. to compute minima of the energy function) is novel, and that this technique unlocked significant speedup – indeed, a 13.5x speedup compared to Laborieux et al (2021). We refer to Appendix C in the supplementary material for details.
We have also clarified in the common rebuttal that NEP, NCpL and CCpL are novel, and that these EBL algorithms are tested for the first time in our work. We hope that this answers the reviewer’s question about new EBL algorithms.
We have also explained in the common rebuttal that we will discuss the limitations of our comparative study (limited to DCHNs) in the conclusion section of our manuscript.
Finally, since the reviewer appreciates the usefulness of our large-scale study of EBL methods, we would like to point out that the significant (13.5x) speedup enabled by our novel “asynchronous update method” is also the reason why our extensive comparative study was possible at all. We carried out 135 simulations (5 datasets * 9 algorithms * 3 runs) on five A100 GPUs for about one week, each run (100 epochs) taking between 3 and 5 hours. While not impossible in theory, without this speedup the same study conducted on 5 A100 GPUs would have taken 3 months to complete (instead of one week). Thus, our novel asynchronous update method (one of the essential novelties of our work) played an important role in making this study possible/affordable.
---
Rebuttal Comment 1.1:
Title: Thanks for the responses. I will raise my score.
Comment: Thanks to the authors for their detailed and informative response. I would recommend emphasizing that NEP, NCpL and CCpL are novel contributions of this work more prominently in revisions, but I agree that these are useful and novel variants which yield counter-intuitive insights into EBL learning. I also appreciate that implementation efficiency is a key part of viable machine learning methods, and the authors' claims about the technical importance of their work seem reasonable. Overall, the responses motivated me to raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer dBi8,
Thank you for considering our arguments about the novelty of our work and for raising your score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and comments. Several criticisms of the manuscript by the reviewers fall into the following two categories:
1. Lack of novelty
2. The choice of DCHNs for comparing the seven EBL algorithms, and the lack of discussion of the limitations of our study
We address 1 and 2 in this order.
To address 1, we would like to point out the following novelties in our work.
First, while the network architecture that we consider (the deep convolutional Hopfield network, DCHN) is not novel, our energy minimization method for DCHNs (the “asynchronous update method”) is a novel contribution to the literature, as opposed to the “synchronous update method” used e.g. in Laborieux et al (2021). Thanks to our asynchronous update method, using reduced precision (16 bits instead of 32 bits) and fewer iterations (60 iterations instead of 250), we get an overall 13.5x speedup compared to Laborieux et al (2021). We refer to Appendix C in the supplementary material for details. Importantly, the code of Laborieux et al (2021) yields the expected performance only with 32-bit precision; when running their code with 16-bit precision, the performance collapses! This observation suggests that switching from their “synchronous update method” to our “asynchronous update method” improves the quality of convergence to equilibrium (the minimum of the energy function), which in turn allows the use of 16-bit precision and a reduced number of iterations. Thus, our “asynchronous update method” was critical to obtaining the overall 13.5x speedup.
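To illustrate the distinction, here is a minimal toy sketch (our own, not the paper's code) of synchronous versus asynchronous fixed-point updates for minimizing the energy of a small layered Hopfield chain; the layer sizes and the quadratic energy are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer Hopfield chain: s0 is the clamped input; s1, s2 are free layers.
# Illustrative quadratic energy:
#   E(s1, s2) = 0.5*||s1||^2 + 0.5*||s2||^2 - s0.W1.s1 - s1.W2.s2
s0 = rng.normal(size=4)
W1 = 0.1 * rng.normal(size=(4, 5))
W2 = 0.1 * rng.normal(size=(5, 3))

def energy(s1, s2):
    return 0.5 * (s1 @ s1 + s2 @ s2) - s0 @ W1 @ s1 - s1 @ W2 @ s2

def synchronous_step(s1, s2):
    # Jacobi-style: every layer is updated in parallel from the *old* states.
    return W1.T @ s0 + W2 @ s2, W2.T @ s1

def asynchronous_step(s1, s2):
    # Gauss-Seidel-style: layers updated in sequence; s2 sees the *fresh* s1.
    # Each line exactly minimizes E over that layer's state.
    s1 = W1.T @ s0 + W2 @ s2
    s2 = W2.T @ s1
    return s1, s2

s1_sync, s2_sync = np.zeros(5), np.zeros(3)
s1_async, s2_async = np.zeros(5), np.zeros(3)
for _ in range(100):
    s1_sync, s2_sync = synchronous_step(s1_sync, s2_sync)
    s1_async, s2_async = asynchronous_step(s1_async, s2_async)
```

In this quadratic toy case the asynchronous sweep is exact coordinate minimization, so each sweep can only lower the energy, consistent with the observation that asynchronous updates improve the quality of convergence to equilibrium.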
Second, our work introduces novel training algorithms (EBL algorithms), namely NEP, NCpL and CCpL. NEP is new and tested in our work for the first time. Our study revealed the non-obvious fact that our NEP algorithm performs much better than the original PEP algorithm introduced by Scellier and Bengio (2017), and often performs as well as the CEP algorithm introduced by Laborieux et al (2021). Similarly, while Coupled Learning was introduced by Stern et al (2021) in its positively-perturbed version (PCpL), our work introduces CCpL and NCpL and tests them for the first time. Our study reveals that both CCpL and NCpL perform better than the original PCpL algorithm.
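For concreteness, the three perturbation schemes can be written schematically as follows (our notation, a simplified sketch of the estimators rather than the paper's exact definitions). With total energy $F(\theta, \beta, s) = E(\theta, s) + \beta\, C(s)$ and $s_\beta$ an equilibrium state of $F(\theta, \beta, \cdot)$:

```latex
% Schematic P/N/C gradient estimators (notation ours):
\hat{g}_{\mathrm{P}} = \frac{1}{\beta} \left(
    \frac{\partial F}{\partial \theta}(\theta, \beta, s_{\beta})
  - \frac{\partial F}{\partial \theta}(\theta, 0, s_{0}) \right)
  \qquad \text{(positive perturbation)}
\\[4pt]
\hat{g}_{\mathrm{N}} = \frac{1}{\beta} \left(
    \frac{\partial F}{\partial \theta}(\theta, 0, s_{0})
  - \frac{\partial F}{\partial \theta}(\theta, -\beta, s_{-\beta}) \right)
  \qquad \text{(negative perturbation)}
\\[4pt]
\hat{g}_{\mathrm{C}} = \frac{1}{2\beta} \left(
    \frac{\partial F}{\partial \theta}(\theta, \beta, s_{\beta})
  - \frac{\partial F}{\partial \theta}(\theta, -\beta, s_{-\beta}) \right)
  \qquad \text{(centered perturbation)}
```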
Additionally, we provide new theoretical results (Theorems 2 and 3) on how these different EBL algorithms optimize objective functions. In particular, we show that N-EP optimizes an upper bound on the cost function, whereas P-EP optimizes a lower bound.
Next, we address 2. The reason we chose DCHNs for comparison in this manuscript is that, among the energy-based architectures compatible with analog hardware (memristive networks), DCHNs are the best performing architectures in the literature. Given this, and the fact that these are some of the largest scale simulations of these learning algorithms to date, we believe that these results will scale to larger datasets and more complex architectures. We will explain in the conclusion section that our experimental findings are limited to DCHNs, that they might not hold beyond these networks, and that this will be left for future work to investigate.
References:
1. Laborieux, A., Ernoult, M., Scellier, B., Bengio, Y., Grollier, J., & Querlioz, D. (2021). Scaling equilibrium propagation to deep convnets by drastically reducing its gradient estimator bias. Frontiers in neuroscience, 15, 633674.
2. Scellier, B., & Bengio, Y. (2017). Equilibrium propagation: Bridging the gap between energy-based models and backpropagation. Frontiers in computational neuroscience, 11, 24.
3. Stern, M., Hexner, D., Rocks, J. W., & Liu, A. J. (2021). Supervised learning in physical networks: From machine learning to learning machines. Physical Review X, 11(2), 021045. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Statistical Guarantees for Variational Autoencoders using PAC-Bayesian Theory | Accept (spotlight) | Summary: The paper introduces generalization bounds for variational autoencoders (VAEs) by employing PAC-Bayesian bounds. Initially, the authors derive a PAC-Bayesian bound utilizing a posterior distribution (Theorem 3.1). This result is subsequently utilized to establish a generalization bound for the reconstruction loss (Theorem 4.2) and the generated distribution of the VAE (Theorem 5.1). These two outcomes rely on relatively broad quantities, which are further elaborated in corollaries that incorporate additional assumptions, namely bounded instance spaces and manifold assumptions.
Strengths: The paper is well written, presenting a notable and original scientific contribution to the field. The assumptions are thoroughly discussed, and despite the technical nature of the results, the authors make an effort to provide the reader with intuitive explanations, which is commendable.
Weaknesses: Considering the nature of the NeurIPS conference, it would be beneficial to include an experimental section in the paper. Given its strong theoretical content, conducting experiments on synthetic problems could be valuable in assessing the asymptotic behavior of the bound concerning various parameters such as reconstruction losses, $\lambda$, and Lipschitz constants. For example, these synthetic experiments could utilize the assumption of bounded instance space, with a 1D input space, allowing for accurate approximation of the Lipschitz constants of the models.
Aside from this suggestion, as I am not familiar with PAC-Bayes theory or the VAE literature, I am unable to identify any evident flaws or weaknesses in the paper that could aid the authors in improving their work.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * Does the dimension of the latent space has any direct influence on the bound, or is it present only through e.g. the space's diameter?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors mention the limitation of their approach in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review and positive assessment of our work.
> Given its strong theoretical content, conducting experiments on synthetic problems could be valuable in assessing the asymptotic behavior of the bound
Given the technical nature of the results, our focus was on making the presentation as clear and intuitive as possible, without sacrificing mathematical accuracy. We address the question of numerical experiments in our general response.
> Does the dimension of the latent space has any direct influence on the bound, or is it present only through e.g. the space's diameter?
This is a great question. Although it does not explicitly appear in the expressions, the dimension $d_\mathcal{Z}$ of the latent space does have an influence on the bounds, because it affects both the reconstruction loss and the discrepancy between $q_\phi(\mathbf{z} | \mathbf{x}_i)$ and $p(\mathbf{z})$. Indeed, if $d_\mathcal{Z}$ is too small, then depending on the complexity of the encoder and decoder networks, the model may not be able to properly reconstruct the samples, which leads to larger empirical and population reconstruction losses. On the other hand, if $d_\mathcal{Z}$ is too large, then the KL divergence may get too large as well, and if $q_\phi(\mathbf{z} | \mathbf{x}_i)$ and $p(\mathbf{z})$ are too far apart, then the upper bounds on the Wasserstein distance between $\mu$ and $g_\theta \sharp p(\mathbf{z})$ become larger, because they depend on the Wasserstein-2 distance between $q_\phi(\mathbf{z} | \mathbf{x}_i)$ and $p(\mathbf{z})$.
Note that in our bounds leveraging the manifold assumption (Theorems 4.4, 5.3 and 5.4), the intrinsic dimension $d^*$ can be different from the latent dimension $d_\mathcal{Z}$. The intrinsic dimension explicitly appears in the bound of Theorem 4.4, but not in Theorems 5.3 and 5.4. This is because the upper bound on the exponential moment is dimension-free, and the bounds of Section 5 do not depend on the average distance.
We thank the reviewer again for their hard work and insightful review.
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for answering my question. I maintain my score. | Summary: This paper derives novel PAC-Bayesian bounds for VAEs by treating the variational posterior as the PAC-Bayes posterior. To do this, the authors must adapt the PAC-Bayes theorem to also hold in the case where the posterior is conditioned on a learning sample.
Strengths: - Well-written paper, with well presented results
- Limitations clearly stated without exaggeration
- Clearly addresses an important theoretical topic
- Novel theoretical results
Weaknesses: - Bound unfortunately needs to be computed using samples different from those used to train the VAE. This runs counter to one of the most interesting and useful aspects of PAC-Bayes bounds - that they can be used as learning objectives.
- The bounds are not numerically computed.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Have the authors tried numerically computing these bounds in any situations? How tight are they? How far are they from being non-vacuous?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors describe the limitations well in their Discussion and Conclusion section, which is well-written and insightful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments and their appreciation of our theoretical contributions.
> Bound unfortunately needs to be computed using samples different from those used to train the VAE. This runs counter to one of the most interesting and useful aspects of PAC-Bayes bounds - that they can be used as learning objectives.
The reviewer is right. As mentioned in the conclusion, the bounds need to be computed using a set of samples disjoint from the training set. We note that since the results in Section 5 provide upper bounds on the Wasserstein distance between distributions on (possibly) high-dimensional spaces, having empirical upper bounds may still be very useful, because of the difficulty of estimating the Wasserstein distance in high dimension in the general case.
> Have the authors tried numerically computing these bounds in any situations? How tight are they? How far are they from being non-vacuous?
We are currently working on experiments on synthetic datasets, and we will add the results to the manuscript. We also address the question of the experiments in our general response.
Once again, we thank the reviewer for their hard work and insightful review.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. As it seems my understanding of the situation is correct, I will maintain my score. | Summary: In this paper, the authors provide a novel general PAC-Bayesian bound for a posterior distribution conditioned on individual elements of the instance space (and not only on observed samples). They then use it to derive generalization bounds for reconstruction loss, regeneration, and generation in VAE (both for bounded instance space and under the manifold assumption).
Strengths: 1. The results of the paper appear to be original and significant. As the authors claim, this is probably the first work that provides statistical guarantees for VAE.
2. The paper is clearly written, so it is easy to understand.
3. The paper contains a formal theoretical analysis and an intuitive discussion of assumptions and results.
Weaknesses: 1. Perhaps the main weakness is the lack of experimental results. However, I am not sure they are necessary (compare, e.g., with the related work [1]).
[1] Chakrabarty, A. and Das, S. (2021). Statistical regeneration guarantees of the Wasserstein autoencoder with latent space consistency. In Advances in Neural Information Processing Systems.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Minor comments:
l. 46: I would be careful about calling WAE a variant of VAE.
l. 92: $p\ll q$ could be explained.
l. 106: Is "data generating distribution" the same as "input distribution"?
l. 134: footnote character '1' should be before a comma.
l. 170 (bottom): ',' --> '.'.
l. 192: '.' --> ':'.
l. 219: "exists" --> "exist".
l. 289: Is the word "uniform" necessary here?
l. 321, 327: Add ':' at the end of the line.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and thoughtful suggestions, which will help us improve the paper.
> Perhaps the main weakness is the lack of experimental results. However, I am not sure they are necessary (compare, e.g., with the related work [1])
Indeed, the main objective of this work is theoretical. [1] present asymptotic bounds on the regeneration properties of WAEs, while our bounds are empirical, and cover the reconstruction, regeneration, and generation properties of VAEs. We also address the question of the experiments in our general response.
> I would be careful about calling WAE a variant of VAE.
We understand how this could be bothersome. We will rewrite that sentence accordingly.
> l. 106: Is "data generating distribution" the same as "input distribution"?
Yes, they are the same thing. Both designations refer to the distribution later denoted $\mu$.
> l. 289: Is the word "uniform" necessary here?
Yes, it is necessary, and we agree that the sentence is a bit ambiguous. Here, the word "uniform" refers to the choice of $i \in \{1, \dots, n\}$, such that the distribution $q_\phi(\mathbf{z} | \mathbf{x}_i)$ is used to sample $\mathbf{z} \sim q_\phi(\mathbf{z} | \mathbf{x}_i)$.
In other words, it is not $\mathbf{z}$ that is sampled "uniformly", but $i \in \{1, \dots, n\}$ that is sampled uniformly, since all the coefficients in the empirical regenerated distribution are equal to $\frac{1}{n}$. We will reformulate the sentence to eliminate the ambiguity.
Once again, we thank the reviewer for their hard work. We are grateful for the reviewer's suggestions, and will use them to improve the manuscript.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I am satisfied with the authors' rebuttal. I am willing to raise my rating depending on the conclusions of the second phase of the discussion. | Summary: 【Post-rebuttal Comments】
I thank the authors for the discussions after the authors' rebuttal. My questions about variance estimation are appropriately answered. So, I want to keep my score and vote for acceptance.
【Original Comments】
This paper derives the PAC-Bayes bound on the hypothesis set by conditional distributions (Theorem 3.1). As an example of its application, this paper gives three kinds of generalization bounds for Variational AutoEncoder by interpreting its encoder as a hypothesis set. Specifically, this paper provides guarantees for the reconstruction of data points (Theorem 4.3), the regeneration of data distributions (Theorem 5.1), and the generation from prior distributions (Theorem 5.2). Furthermore, by assuming the manifold hypothesis for the input distribution, sharper bounds are derived that depend on the manifold's dimension rather than the input dimension (Theorem 4.4, Theorem 5.3, Theorem 5.4).
Strengths: - Instead of simply applying the existing PAC-Bayes bound, this paper derives PAC-Bayes bounds for the posterior distribution conditioned on samples drawn from the data distribution. They are novel from the perspective of statistical learning theory, verified by the comparison with existing work on the PAC-Bayes bound for conditional distributions (Rivasplata et al. (2020)) and generalization bound for VAE (Chakrabarty and Das, 2021 and Cherief-Abdellatif et al., 2022).
- Many generalization bounds for VAE are systematically derived from a single PAC-Bayes bound, showing its generality.
- The paper is well-written. Both the organization and mathematical descriptions of the paper are appropriate. I had no significant difficulties in reading the paper.
Weaknesses: - The obtained bounds are not uniform with respect to decoder parameters. This restriction affects the sample size rate of the bounds: If I understand correctly, the O(1/n) terms in the upper bounds come from the fact that the decoder is a single hypothesis so that the (non-uniform) Hoeffding bound can be applied.
- The decoder outputs mean parameters only, and the variance of the distribution modeled by the decoder is fixed. This architecture differs from the one commonly used in practice.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: By applying uniform concentration inequality with respect to the decoder parameters (more specifically, the family of loss functions parametrized by the decoder), can we give uniform bound with respect to the decoder parameters at the cost of worsening the sample size rate?
l.134: iid -> i.i.d.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper discusses limitations in Section 6: (1) the parameters of the decoder are fixed in the derived bounds, and (2) the L1 loss is used as the reconstruction loss instead of the commonly used L2 loss (this is equivalent to modeling the decoder as the Laplace distribution instead of the Gaussian distribution).
Another limitation is that the VAEs used in this analysis have a fixed decoder variance (l.116). If I do not miss any information, it is not discussed whether we can extend the obtained bounds to VAEs that also estimate variance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we thank the reviewer for their thoughtful review and insightful comments.
> By applying uniform concentration inequality with respect to the decoder parameters (more specifically, the family of loss functions parametrized by the decoder), can we give uniform bound with respect to the decoder parameters at the cost of worsening the sample size rate?
It is possible to obtain a uniform bound with respect to the decoder's parameters $\theta \in \Theta$ if we assume $\Theta$ is finite. In this case, the union bound leads to a penalty of $\log \lvert \Theta \rvert$. The problem with this is that since $\Theta$ is a set of neural network parameters, this assumption may not be accurate unless $\lvert \Theta \rvert$ is very large, which may significantly worsen the bound.
Regarding uniform concentration inequalities, we believe their usage would require some assumption on the complexity of $\Theta$, and we do not believe the problem to be straightforward. We agree with the reviewer that obtaining uniform bounds w.r.t. $\theta$ is important, and we plan on exploring that in future work.
We also mention that the non-uniformity w.r.t. $\theta$ is due to the fact that, in general, PAC-Bayes considers a single loss function, and in this case the loss function depends on the decoder's parameters.
> the VAEs used in this analysis have a fixed decoder variance (l.116). If I do not miss any information, it is not discussed whether we can extend the obtained bounds to VAEs that also estimate variance.
Indeed, the variance of the decoder is fixed, and there is a way to extend the results to optimize the variance as well. Assuming the standard deviation $\sigma$ is constant, our bounds yield $\sigma \propto \frac{n}{\lambda}$ (because in our expressions, the sum of KL divergences is only divided by $\lambda$, whereas it is usually divided by $n$ as well). Hence, optimizing the decoder's variance $\sigma^2$ is equivalent to optimizing the hyperparameter $\lambda$, which can be done for PAC-Bayes bounds, but at a cost.
Most PAC-Bayes bounds (including ours) do not directly allow one to optimize $\lambda$ (see Section 2.1.4 of Alquier, 2021 and references therein). And although there are some ways around this restriction, we are not aware of any results that allow one to optimize $\lambda$ in the general case (meaning continuous $\lambda$ and unbounded loss). If the loss function is $[0, 1]$-bounded, [1] developed a PAC-Bayes bound uniformly valid for a trade-off parameter $\lambda' \in (0, 2)$. For unbounded losses, if one assumes $\lambda \in \Lambda$, where $\Lambda$ is finite, a union bound argument allows one to make the bound uniform with respect to $\lambda \in \Lambda$, at the cost of $\log \lvert \Lambda \rvert$. One can still optimize with respect to a continuous set $\Lambda$, by replacing $\lambda$ with $\lfloor \lambda \rfloor$. So, for instance, if $\Lambda = [1, n]$ and we replace $\lambda$ with $\lfloor \lambda \rfloor$, the penalty is $\log n$.
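As a sketch of the union-bound step mentioned above (our notation; the exact form of the base bound is the one in the paper): if the base bound holds with probability at least $1 - \delta'$ for each fixed $\lambda$, instantiating it with $\delta' = \delta / \lvert \Lambda \rvert$ and summing the failure probabilities gives

```latex
% Union bound over a finite grid \Lambda (schematic, notation ours):
\mathbb{P}\Big( \exists\, \lambda \in \Lambda \ \text{s.t. the bound at } \lambda \text{ fails} \Big)
  \;\le\; \sum_{\lambda \in \Lambda} \frac{\delta}{\lvert \Lambda \rvert} \;=\; \delta,
```

so each confidence term $\log(1/\delta)$ becomes $\log(\lvert \Lambda \rvert / \delta) = \log(1/\delta) + \log \lvert \Lambda \rvert$; with $\Lambda = [1, n]$ discretized via $\lfloor \lambda \rfloor$, this penalty is $\log n$.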
In summary, we can use the union bound to extend our results to VAEs that estimate the variance of the likelihood. This extension comes with a penalty depending on the size of the chosen space. We will mention this in the main paper, and add some formal details in the supplementary material.
[1] A strongly quasiconvex PAC-Bayesian bound. Thiemann, N. and Igel, C. and Wintenberger, O. and Seldin, Y.; International Conference on Algorithmic Learning Theory 2017
Once again, we thank the reviewer for their hard work and interesting questions. We hope that the reviewer finds our answers satisfactory, and we will happily answer any further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I answer each question in the following responses.
**Uniformity w.r.t. decoder parameters**
Thank you, I understand and agree with the authors' comments.
**Estimation of Variances**
I appreciate the authors' explanations. However, it seems the authors assumed something different from what I have in mind.
The authors appeared to discuss extending the theory to the situation where we estimate the variance parameter $\sigma$ **independent of the instance** $\boldsymbol{x}$. That is, the loss function (6) is changed to $\ell^{\theta}_{rec}(\boldsymbol{z}, \boldsymbol{x}) = \sigma^{-1}\|g_\theta(\boldsymbol{z}) - \boldsymbol{x}\|$ (correct me if I am wrong). On the other hand, I intended the case where the variance parameter depends on the instance $\boldsymbol{x}$: the output of the decoder is $(g_\theta(\boldsymbol{z}), \sigma_\theta(\boldsymbol{z}))$, and the loss function is (for example)
$\ell^{\theta}_{rec}(\boldsymbol{z}, \boldsymbol{x}) = \sigma_\theta(\boldsymbol{z})^{-1}\|g_{\theta}(\boldsymbol{z}) - \boldsymbol{x}\|$.
I am sorry for my lack of explanation.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the clarification. Indeed, we discussed the case when the variance is learned from the training set, but is independent of individual instances $\\mathbf{x}$. Let us define the loss function, as the reviewer suggested, as $\\ell\_{\\text{rec}} =\\frac{1}{\\sigma\_\\theta(\\mathbf{z})} \\lVert \\mathbf{x} - g\_\\theta(\\mathbf{z}) \\rVert$. First, because of the division by $\\sigma\_\\theta(\\mathbf{z})$, we assume there is $\\sigma\_1 >0$ such that for any $\\mathbf{z} \\in \\mathcal{Z}$, $\\sigma\_\\theta(\\mathbf{z}) \\geq \\sigma\_1$. There are two main problems: making sure Assumption 1 is satisfied, and bounding the exponential moment of Theorem 3.1.
The first problem is equivalent to showing that Proposition 4.1 can be extended to this loss function. Following the second part of the proof of Proposition 4.1 (line 80 in the supplementary material), we need to show that $\\ell\_{\\text{rec}}$ is Lipschitz-continuous. The problem here is that in general, the product of real-valued Lipschitz functions is not Lipschitz. Hence, even assuming that $\\sigma\_\\theta$ is $K\_\\sigma$ Lipschitz (for some $K\_\\sigma > 0$), it does not seem possible to achieve this without additional assumptions. If we assume, in addition, that $\\lVert \\mathbf{x} - g\_\\theta(\\mathbf{z}) \\rVert \\leq M$ is bounded, then we obtain
\$ \\ell\_{\\text{rec}}(\\mathbf{z}\_1, \\mathbf{x}) - \\ell\_{\\text{rec}}(\\mathbf{z}\_2, \\mathbf{x}) \\leq \\left( \\frac{K\_\\sigma M}{\\sigma\_1^2} + \\frac{K\_\\theta}{\\sigma\_1} \\right) \\lVert \\mathbf{z}\_1 - \\mathbf{z}\_2 \\rVert \$
which implies that Assumption 1 is satisfied with the constant $K\_\\phi \\left( \\frac{K\_\\sigma M}{\\sigma\_1^2} + \\frac{K\_\\theta}{\\sigma\_1} \\right)$, instead of $K\_\\phi K\_\\theta$, and with the family $\\mathcal{E}$ being the set of functions from $\\mathcal{Z}$ to $\\mathbb{R}$ with Lipschitz norm at most $\\left( \\frac{K\_\\sigma M}{\\sigma\_1^2} + \\frac{K\_\\theta}{\\sigma\_1} \\right)$.
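For clarity, the constant follows from the standard quotient decomposition, using only the assumptions already stated (the bound $M$, the lower bound $\\sigma\_1$, and the Lipschitz constants $K\_\\theta$ and $K\_\\sigma$):

```latex
% Quotient decomposition, with a_i = ||x - g_theta(z_i)|| and b_i = sigma_theta(z_i):
\frac{a_1}{b_1} - \frac{a_2}{b_2}
  = \frac{a_1 - a_2}{b_1} + a_2 \, \frac{b_2 - b_1}{b_1 b_2}.
% The reverse triangle inequality and the Lipschitz assumptions give
%   |a_1 - a_2| <= K_theta ||z_1 - z_2||,   |b_2 - b_1| <= K_sigma ||z_1 - z_2||,
% while a_2 <= M, b_1 >= sigma_1 and b_1 b_2 >= sigma_1^2, hence
\left| \frac{a_1}{b_1} - \frac{a_2}{b_2} \right|
  \le \left( \frac{K_\theta}{\sigma_1} + \frac{K_\sigma M}{\sigma_1^2} \right)
      \lVert \mathbf{z}_1 - \mathbf{z}_2 \rVert .
```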
In this case, when the instance space is bounded, the upper bound on the exponential moment (in the proof of Theorem 4.3) is:
\$
\\frac{\\lambda^2 \\Delta^2}{8n \\sigma\_1^2}, \\quad \\text{ instead of } \\quad \\frac{\\lambda^2 \\Delta^2}{8n}.
\$
And under the manifold assumption, we get the following upper bound on the exponential moment (in the proof of Theorem 4.4):
\$
\\frac{\\lambda^2 K\_*^2}{2n \\sigma\_1^2}, \\quad \\text{ instead of } \\quad \\frac{\\lambda^2 K\_*^{2}}{2n}.
\$
Note that although the upper bounds on the average distance remain unchanged, the coefficient $K\_\\phi K\_\\theta$ is replaced by $K\_\\phi \\left( \\frac{K\_\\sigma M}{\\sigma\_1^2} + \\frac{K\_\\theta}{\\sigma\_1} \\right)$, which can be larger, especially if $\\sigma\_1$ is very small.
So in summary, it is possible to extend our results to make the variance of the likelihood dependent on individual samples, but this comes with additional assumptions, making Theorem 4.2 less general. It may be possible to improve the results presented above, perhaps by making different assumptions on the function $\\sigma\_\\theta$.
As for the results of Section 5, our approach almost "forces" one to consider the loss function we defined in the paper. Indeed, since by default the VAE's generative model only considers the mean $g\_\\theta(\\mathbf{z})$ of the distribution induced by $\\mathbf{z}$ in $\\mathcal{X}$, the approach we took (Lemma D1) leads directly to Lemma B1, which considers a decoder with constant variance. Therefore, although the upper bounds in Section 5 can be affected by $\\sigma\_\\theta$ through its effect on the empirical losses, the results of Section 5 remain the same, unless one considers a generative model different from the usual $g\_\\theta \\sharp p(\\mathbf{z})$.
We hope this answers the reviewer's question, and we are happy to answer any further questions. We sincerely thank the reviewer for raising interesting questions. | Rebuttal 1:
Rebuttal: We are extremely grateful to all the reviewers for taking the time to read our work and make thoughtful comments and suggestions.
All the reviewers seem to agree that the subject of this work (extending PAC-Bayes theory to conditional posteriors and deriving statistical guarantees for VAEs) is important, and the theoretical results are novel and significant. The reviewers also seem to agree that the paper is well-written and the assumptions and results are clearly presented.
Some of the reviewers inquired about the numerical behavior of our bounds. We agree that this is an interesting question, and we are working on the implementation and some experiments on toy datasets. We will add the results to the final version of the manuscript, and the code will be publicly available. Nevertheless, we emphasize that our primary objective was to develop novel theoretical results and present them with as much clarity as possible. Given the importance of VAEs in the Machine Learning community and the lack of statistical guarantees, our goal was to establish a new theoretical framework for the analysis of the statistical properties of VAEs. We hope our work will foster new ideas and insights in the community.
Once again, we thank the reviewers for their hard work, and positive assessment of our manuscript. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Adaptive Online Replanning with Diffusion Models | Accept (poster) | Summary: The authors address the online replanning problem within diffusion-based models by introducing their method, RDM, which uses the likelihood of trajectories to decide when and how to replan.
Strengths: - The topic of when and how to replan is an important one, particularly within the diffusion model community where it has not, to the best of my knowledge, been addressed before.
- The methodology section appears to be sound, and the idea of choosing to replan and what type of replanning should happen based on the trajectory likelihoods is compelling.
- The paper is clear, well-written and easy to follow.
Weaknesses: - The experimental results are lacking in terms of comparison with other simple baselines that are able to replan. While the authors partially address this in the ablation study of Section 4.4, it is unclear why these are also not used in the main experiments. Given the aim of the paper is to provide a good replanning strategy, I do not believe this should be an ablation only, but rather that most results should include “no replanning baselines” (i.e., the ones already provided), and “replanning baselines” as proposed in the ablation. This would allow a reader to take conclusions regarding the actual improvement from performing no replanning to a basic strategy to the proposed method.
- The absence of replanning in the baselines could explain the significant drop in performance in the results of Sections 4.1, 4.2 and 4.3. A comparison with replanning baselines (both full replanning and, for example, replanning with a randomly selected number of steps) would strengthen the case that the "when" and "how" to replan are key to the success of the results.
- The paper is also lacking an ablation with a variable number of replanning steps. In Section 4.4 the authors compare DD which replans “using two fixed intervals”, but how does the performance change for a higher replanning frequency?
**Minor comments**:
- Typo in abstract, line 1, should be “have risen as a promising”
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How does this method compare to a fixed, fully replanning strategy at different intervals in the main scenarios compared in the experiments?
- How are the thresholds in Algorithm 1 tuned, and how does the method perform for different $l_1$/$l_2$ values?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors address some limitations of their method in Section 6 of the paper, and there are no concerns regarding potential negative societal impact for this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We thank the reviewer for the reviews and insightful suggestions.
**Q1. Comparison.**
> The experimental results are lacking in terms of comparison with other simple baselines that are able to replan.
We show the results of the comparison between our RDM algorithm and the replanning-based baseline algorithms (SDM and PRDM) across more tasks. We run experiments using different intervals or thresholds and measure their computational costs by the total number of diffusion steps. In the table below, we choose one task from each of the three domains and report the total diffusion steps and the performance, and show more evaluation results in Figure I-V in the rebuttal PDF. Notably, RDM consistently outperforms all the baseline algorithms under the same total diffusion steps.
Table A. The total diffusion steps and performance.
|Environment | Method | Total Diffusion Steps | Normalized Returns |
| ----------------------------------------- | -------- | --------------------- | ------------------ |
| Maze2D Large | Diffuser | 2304.0 | 165.9 (±3.8) |
| | SDM | 1925.1 | 175.0 (±5.3) |
| | PRDM | 1916.4 | 169.3 (±1.4) |
| | RDM | 1894.38 | 185.4 (±3.0) |
| Hopper Medium-Expert | DD | 17100 | 47.4 (±3.6) |
| | SDM | 18500 | 54.4 (±3.5) |
| | PRDM | 16200 | 57 (±4.3) |
| | RDM | 15900 | 59.7 (±3.8) |
| Close Box | DD | 1968 | 46.0 (±7.5) |
| | SDM | 1813 | 52.4 (±6.6) |
| | PRDM | 1653 | 50 (±7.2) |
| | RDM | 1600 | 63.5 (±7.0) |
**Q2. Replan interval.**
> How does the performance change for a higher replanning frequency?
We investigate different replanning intervals $I$ for Diffuser and Decision-Diffuser. The results are shown in Figures VI and VII. We observe that as the interval decreases (that is, as replanning is done more frequently), the performance improves, as we expect. However, when the interval falls below a certain value, the performance decreases significantly. For example, in the Maze2D Large domain shown in Figure VI, the return increases when the interval decreases from $250$ to $100$, while it drops when the interval decreases from $100$ to $1$. The ablation results confirm our statement that replanning with an interval that is too small (for example, replanning at every time step) may prevent successful task execution.
**Q3. Replan threshold.**
> How are the thresholds in Algorithm 1 tuned, and how does the method perform for different $l_1$/$l_2$ values?
We analyze the impact of different thresholds $l_s$ for RDM. The results are shown in Figures X-XI in the rebuttal PDF. As we expect, when the threshold decreases (that is, when replanning is done more frequently), the performance improves. We choose the best result for different $l_s/l_f$ values under the same total diffusion steps.
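As an illustration of this decision rule (the threshold names `l_s`/`l_f` and the likelihood values here are hypothetical stand-ins, not our actual implementation), the when/how logic can be sketched as:

```python
# Hypothetical sketch of the likelihood-thresholded replanning decision
# (threshold names l_s / l_f and the likelihood values are illustrative,
# not the paper's actual code).
def replan_decision(plan_log_likelihood, l_s, l_f):
    """Return what to do given the likelihood of the partially executed plan.

    l_s: below this, the current plan is considered stale and is revised.
    l_f: below this (l_f <= l_s), the plan is regenerated from scratch.
    """
    assert l_f <= l_s
    if plan_log_likelihood >= l_s:
        return "keep"     # plan still likely under the model: keep executing
    if plan_log_likelihood >= l_f:
        return "partial"  # mildly unlikely: re-noise and denoise the old plan
    return "full"         # very unlikely: generate an entirely new plan

print(replan_decision(-1.0, l_s=-2.0, l_f=-5.0))  # keep
print(replan_decision(-3.0, l_s=-2.0, l_f=-5.0))  # partial
print(replan_decision(-9.0, l_s=-2.0, l_f=-5.0))  # full
```

Lowering either threshold triggers replanning more often, which is exactly the trade-off against total diffusion steps measured above.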
**Q4. Minor comments.**
> Typo in abstract, line 1, should be “have risen as a promising”.
Thanks for pointing this out. We will address them in the revision.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for the detailed rebuttal.
**On the comparison**: I thank the authors for the extra results, as they contextualize the performance improvements of RDM. I believe including these results in the paper will make the case for the method stronger.
**On the replan interval**: the failure to plan well at a high replanning frequency is an interesting observation. These results show that a simple fixed-time replanning strategy significantly underperforms RDM, which again makes the case for the method stronger.
**On the replan threshold for RDM and SDM**: these results are expected, but also showcase the interesting monotonic behavior of the returns with respect to the threshold. The differences between these returns and the ones obtained by simply reducing the replanning interval also showcase the non-trivial nature of the method.
The authors have addressed all the concerns I had regarding the paper, so I am updating the score accordingly.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer XopQ,
Thank you for your comments on our response. We are happy that your concerns have been addressed. We still have a few days left in the discussion period. If you have any further questions or if there is anything else we can provide to show the merits of our work, please don't hesitate to let us know. Thank you!
Best,
Authors | Summary: The submission describes a method for deciding when and how to "replan" when using diffusion models for inferring plans, as in Decision Diffuser [1]. The task is essentially imitation learning (IL): given a training dataset of plans and features, the task is to predict a new plan given novel features. As in [1], the IL method consists of training a diffusion model (DM) to predict state sequences given contexts, and the features happen to include the reward of the plan. A high-reward plan can therefore be inferred by conditioning the model on a high reward value.
The replanning aspect of this work is useful because running the diffusion model may be expensive, making it difficult to use diffusion-model-based planning to do receding-horizon control. Replanning only when necessary saves computation compared to planning every cycle.
Strengths: Originality:
The ideas of trying to infer when and how to replan when planning with diffusion models seem novel. I'm not aware of other work on inferring when to replan in general, so that general idea may be novel as well. A nice feature of applying probabilistic methods to planning is that it becomes quite natural to ask such questions.
Quality:
There are some promising aspects to the experiments. I appreciated that the experiments were targeted towards answering specific questions that were clearly articulated in lines 182-185. The robotic control (RLBench) results were also promising in that the success rate was significantly higher than that of a baseline without replanning.
Significance:
The results show that simply replanning based on some simple heuristics is a viable technique for boosting the performance of diffusion-based planning methods. This is a potentially useful and interesting observation for anyone interested in applying diffusion models to solving planning problems in practice.
Weaknesses: Quality:
I believe the weakest part of the submission is that the experiments are focused mainly on "apples-to-oranges" comparisons of RDM to model-free RL and decision diffusion models without replanning. None of the baselines have the ability to respond to new information at test time, as far as I can tell. In the case of the model-free RL algorithms, this is a bit unclear—theoretically, the learned policies could condition on "real-time" information, but it's unclear whether this is the case. Still, even if the RL policies do condition on novel information, comparing a policy learned offline to a planner that can be evaluated online, seems a bit unfair.
It would make more sense to me if the experiments were focused more on comparisons between RDM and other replanning methods. Since the goal of intelligent replanning is to reduce computation with respect to replanning every cycle, I think it would also be fair to evaluate the net effect of different replanning strategies on the tradeoff between solution time and solution quality. For example, if we replan at fixed intervals, but evaluate that for a range of intervals, what trade-off do we see between solution time and quality compared to RDM? If we replan every cycle, is the solution quality much higher than RDM?
Although the ablation study in section 4.4 does analyze the effects of different replanning strategies, this analysis feels unsatisfying for a few reasons. First, it is unclear how the threshold / interval for replanning were chosen for DD and SDM, which simply replan based on time or state distance thresholds. Ideally, these thresholds would be chosen to maximize some metric evaluated on a validation set, but I could find no details about this.
I also note that the baseline for figure 7 is not zero, which gives a very misleading impression about the relative performance gap between (e.g.) SDM (replan triggered by state distance) and RDM, which is actually pretty small. There are also no error bars on this plot, so it is hard to tell whether this result is significant. This experiment was also only run on a single task (hopper-medium-expert)—the results would be much more interesting if these simple baselines (DD, SDM) were run on the entire suite of tasks. If it turns out that SDM performs almost as well as RDM on most tasks, then that would make RDM significantly less appealing.
Clarity:
The problem statement is unclear. On my initial read-through of the paper, I was under the impression that RDM (the proposed method) was an imitation learning (IL) method—until I reached the experiments, which compared RDM to RL methods. I then backtracked to Section 2.1 (problem setting), which states:
"We consider a trajectory optimization problem similar to Diffuser[17]... the objective of the trajectory optimization problem is to find a sequence of actions … that maximizes J… which is the expected sum of rewards over all timesteps."
After reading the reference for Diffuser[17], this paragraph convinced me that my original belief was wrong, and that RDM is in fact an (online) RL method that uses a diffusion model as a planner—just like Diffuser[17]. I then at some point encountered a Decision Diffuser[1] reference in the paper and decided to revisit that reference to refresh my memory. I was very confused because the paper was not what I remembered it being, and in fact Decision Diffuser[1] is an IL method.
I eventually realized that Diffuser[17] and Decision Diffuser[1] are different methods: the first is an RL method, and the second is an IL method. However, they both share common authors, use similar methods, and have similar names. I then re-read lines 89-91, ("we follow the formulation of planning in Decision Diffuser[1]…") which convinced me that my original impression was correct: RDM is an IL method, not an RL method.
Despite the unfortunately similar names, I believe the real root of my confusion is that the paper lacks a crisp problem statement—the reader should not have to guess as to whether the problem addressed is IL, or offline/online off-policy/on-policy RL. It's unfortunate that Section 2.1 (Problem Setting) introduces the method in RL terms and explicitly states that the problem is similar to that of Diffuser[17], which is an RL method. Instead, that paragraph should probably state something like "Our work addresses a problem similar to that described in Decision Diffuser[1]: given a dataset of training state trajectories and rewards, our task is to predict a state trajectory similar to what was observed in the training dataset, while conditioning on a query reward value. This is essentially an Imitation Learning problem where we observe and condition on rewards, but it is also similar to RL, in the sense that the goal is to produce a trajectory with high reward."
The comparison to RL methods also confuses things—the submission should be clearer about why RDM is compared to RL methods. It would also be beneficial to call Diffuser by a different name (e.g., Planning Diffuser) to disambiguate it from Decision Diffuser.
The exposition of the method could also be significantly improved. The explanation of when to replan in section 3.1 is a bit too verbose and doesn't convey the basic idea well, which is simply to evaluate the likelihood of a plan where states/actions at previous timesteps are replaced with those observed during plan execution. A simple figure would be very helpful here.
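For instance, the basic idea could be conveyed in a few lines of Python (the names and the toy likelihood function are placeholders, not the paper's code):

```python
import numpy as np

# Toy stand-in for the diffusion model's trajectory log-likelihood
# (the real model's likelihood estimate would go here).
def plan_log_likelihood(traj):
    return -float(np.sum(traj ** 2))

def likelihood_after_execution(plan, observed_states, t):
    """Splice the states actually observed during execution into the
    first t steps of the old plan, then score the hybrid trajectory."""
    hybrid = plan.copy()
    hybrid[:t] = observed_states[:t]
    return plan_log_likelihood(hybrid)

plan = np.zeros((16, 2))      # planned trajectory: horizon 16, 2-dim states
observed = np.ones((16, 2))   # what actually happened so far
print(likelihood_after_execution(plan, observed, t=4))  # -8.0
```

A low score on the hybrid trajectory signals that reality has diverged from the plan, which is when replanning is triggered.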
Significance:
One factor that may limit the significance of this work is that the topic is relatively niche—it attempts to solve a particular problem (performance of replanning) with applying a particular planning method (Decision Diffuser), which itself is still relatively immature. Solving niche problems is ok, but the potential for significant impact would be greater if there were some interesting take-away for a more general audience.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: How were the intervals / state distance thresholds set for the DD and SDM baselines in figure 7?
Have you tried evaluating the trade-off between solution quality and planning time for different replanning strategies (e.g., for different possible threshold values in DD and SDM)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The paper adequately addresses the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We thank the reviewer for the detailed comments and questions.
**Q1. Comparison.**
> It would make more sense to me if the experiments were focused more on comparisons between RDM and other replanning methods.
> What trade-off do we see between solution time and quality compared to RDM?
We show the results of the comparison between our RDM algorithm and the replanning-based baseline algorithms across more tasks. We run experiments using different intervals or thresholds and measure their computational time by the total number of diffusion steps. In the table below, we report the total diffusion steps and the performance and show more evaluation results in Figure I-V in the rebuttal PDF. Notably, RDM consistently outperforms all the baseline algorithms under the same total diffusion steps.
Table A. The total diffusion steps and performance.
|Environment|Method|Total Diffusion Steps|Normalized Returns|
|-----|-----|-----|-----|
|Maze2D Large|Diffuser|2304.0|165.9 (±3.8)|
| |SDM|1925.1|175.0 (±5.3)|
| |PRDM|1916.4|169.3 (±1.4)|
| |RDM|1894.38|185.4 (±3.0)|
|Hopper Medium-Expert|DD|17100|47.4 (±3.6)|
| |SDM|18500|54.4 (±3.5)|
| |PRDM|16200|57 (±4.3)|
| |RDM|15900|59.7 (±3.8)|
|Close Box|DD|1968|46.0 (±7.5)|
| |SDM|1813|52.4 (±6.6)|
| |PRDM|1653|50 (±7.2)|
| |RDM|1600|63.5 (±7.0)|
**Q2. Different replanning strategies.**
> How the threshold/interval for replanning were chosen for DD and SDM.
> If we replan every cycle, is the solution quality much higher than RDM?
We investigate different replanning intervals $I$ for Diffuser and Decision-Diffuser. The results are shown in Figures VI and VII. We observe that as the interval decreases (that is, as replanning is done more frequently), the performance improves, as we expect. However, when the interval is smaller than a certain value, the performance decreases significantly. For example, in the Maze2D Large domain shown in Figure VI, the return drops when the interval decreases from $100$ to $1$. The ablation results confirm our statement that replanning with an interval that is too small (for example, replanning at every time step) may prevent successful task execution.
We also analyze the impact of different thresholds $l$ for the baseline algorithms, SDM and RDM. The results are shown in Figures VIII-XI. As we expect, when $l$ decreases (that is, when replanning is done more frequently), the performance improves. Despite the performance differences of the baseline algorithms, we want to emphasize again that, as shown in Figures I-V, under the same computational budget our RDM algorithm outperforms all the baseline algorithms.
**Q3. Clarity**
> The real root of my confusion is that the paper lacks a crisp problem statement.
> The comparison to RL methods also confuses things.
We apologize that our problem statement might be confusing, we would like to clarify that Diffuser [1], Decision-Diffuser [2], and our work can be seen as following the same setting, which is offline reinforcement learning (offline RL) and generate plans conditioned on maximizing reward (where for Long-Horizon and Robotic Control tasks we use the reward function that corresponds to either reaching the conditioned goal or solving the robotics task respectively). To solve this offline reinforcement learning task, Diffuser, Decision-Diffuser and our work sample a trajectory of states (a plan) that maximizes the conditioned reward, where Diffuser samples from the composition of a diffusion model and value function, while Decision Diffuser and our work sampling from conditional diffusion model conditioned on the desired reward function. This plan is then transformed into a policy.
Since the setting we consider is an offline RL setting, our other baselines are naturally offline RL algorithms. It is important to note that Decision-Diffuser is not a behavioral cloning method: in the MuJoCo settings, it does not fit a model to all trajectories in a dataset, but rather learns a reward-conditioned trajectory model (similar to Decision Transformer) in order to construct trajectories that maximize reward in an environment.
Our approach towards replanning can be applied to any Diffusion model that synthesizes a trajectory of actions to optimize a reward function and can be applied to either Diffuser or Decision-Diffuser.
We will revise our problem statement in our revision to make this clearer, please let us know if you have any additional questions.
[1] M. Janner, Y. Du, J. B. Tenenbaum, and S. Levine. Planning with diffusion for flexible behavior synthesis. arXiv preprint arXiv:2205.09991, 2022.
[2] A. Ajay, Y. Du, A. Gupta, J. Tenenbaum, T. Jaakkola, and P. Agrawal. Is conditional generative modeling all you need for decision-making? arXiv preprint arXiv:2211.15657, 2022.
**Q4. Significance.**
> It attempts to solve a particular problem (performance of replanning) with applying a particular planning method (Decision Diffuser)
Thanks for the comment. Our replanning strategy, based on likelihood, can be applied to any diffusion-based planner, which has seen a variety of applications across different planning settings such as robotic policies [4] and video [5]. Broadly, we believe the idea of likelihood-based replanning can also be applied to many other likelihood-based planning methods (e.g., the Trajectory Transformer [3]).
[3] Michael Janner, Qiyang Li, and Sergey Levine. Reinforcement learning as one big sequence modeling problem. arXiv preprint arXiv:2106.02039, 2021.
[4] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, Shuran Song. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. arXiv preprint arXiv:2303.04137, 2023.
[5] Yilun Du, Mengjiao Yang, Bo Dai, Hanjun Dai, Ofir Nachum, Joshua B. Tenenbaum, Dale Schuurmans, Pieter Abbeel. Learning Universal Policies through Text-Conditioned Video Generation. arXiv preprint arXiv:2302.00111, 2023.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer bzBf,
Thank you again for your comments and suggestions on our paper. We hope that our responses and new results have addressed your questions and concerns. We still have a few days left in the discussion period. If you have any further questions, please don't hesitate to let us know and we'll be happy to address them. Thank you!
Best,
Authors | Summary: The manuscript introduces a technique to enhance motion planners rooted in diffusion models, encompassing decisions about when to replan and how to form new trajectories from the existing path. The strategy for timing replanning uses the diffusion model's inherent estimated likelihood of the trajectory as the criterion. To form new trajectories, the authors either completely reconstruct the plan or modify it based on future contexts determined by specific guidelines. The concept is clear-cut, and the outcomes appear to be quite promising.
Strengths: Replanning is important for robots to execute trajectories robustly; deciding when to replan and how to replan are the core challenges of the replanning problem. Tackling these problems can help a lot with diffusion-based robot motion planning methods. Moreover, the overall writing quality is good, and the core idea is presented well.
Weaknesses: It is not surprising that performance improves when a replanning strategy is added to a diffusion model. For the long-horizon planning problem in Section 4.1, the authors compare the performance of RDM and baselines; however, none of the baselines contains a replanning strategy, making the effectiveness of the proposed replanning-timing method hard to evaluate. This also applies to the robotic control tasks in Section 4.3.
As for the stochastic environment in Section 4.2, the authors demonstrate that RDM outperforms baseline models in environments with stochastic transition models. How different levels of stochasticity affect performance is later examined in Section 4.4. However, only one environment is tested, and the performances of RDM and the other baselines are relatively close, which is inadequate to evaluate the effectiveness of RDM's replanning strategy.
In Section 4.4, the authors compare the performance of models using different replanning strategies, but a comparison across different fixed intervals and different state-distance-deviation thresholds is missing, which makes the reported performance of DD/SDM not convincing enough.
There are also some points which are not demonstrated very well:
- The detailed implementation of how the replanned trajectory is generated is missing.
- A theoretical justification of the effectiveness of the likelihood function for partially executed trajectories is missing, and the likelihood threshold used to choose how to replan is selected empirically, which may significantly impact the performance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - For long-horizon planning tasks and robotic control tasks, is using likelihood a better replanning criterion than using state distance deviation?
- Comparing baselines with different replanning-timing decision methods against RDM, how much efficiency improvement does RDM achieve?
- For the stochastic environment, the advantage of methods implementing a replanning strategy over non-replanning methods should increase as stochasticity rises, yet the performances get closer, as shown in Figure 6. Why?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The author addressed the limitations well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We thank the reviewer for the detailed reviews and insightful suggestions.
**Q1. Replanning baselines.**
> For long horizon planning tasks and robotic control tasks, is using likelihood better than using state distance deviation as replanning criteria?
>
> Comparing baselines with different replanning timing decision methods and RDM, how much efficiency improvement will RDM achieve?
We show the results of the comparison between our RDM algorithm and the replanning-based baseline algorithms (SDM and PRDM) across more tasks. We run experiments using different intervals or thresholds and measure their computational costs by the total number of diffusion steps. In the table below, we choose one task from each of the three domains and report the total diffusion steps and the performance, and show more evaluation results in Figure I-V in the rebuttal PDF. Notably, RDM consistently outperforms all the baseline algorithms under the same total diffusion steps.
Table A. The total diffusion steps and performance.
|Environment | Method | Total Diffusion Steps | Normalized Returns |
| ----------------------------------------- | -------- | --------------------- | ------------------ |
| Maze2D Large | Diffuser | 2304.0 | 165.9 (±3.8) |
| | SDM | 1925.1 | 175.0 (±5.3) |
| | PRDM | 1916.4 | 169.3 (±1.4) |
| | RDM | 1894.38 | 185.4 (±3.0) |
| Hopper Medium-Expert | DD | 17100 | 47.4 (±3.6) |
| | SDM | 18500 | 54.4 (±3.5) |
| | PRDM | 16200 | 57 (±4.3) |
| | RDM | 15900 | 59.7 (±3.8) |
| Close Box | DD | 1968 | 46.0 (±7.5) |
| | SDM | 1813 | 52.4 (±6.6) |
| | PRDM | 1653 | 50 (±7.2) |
| | RDM | 1600 | 63.5 (±7.0) |
**Q2. Stochasticity levels.**
> However, only one environment is tested, and the performance of RDM and other baselines are relatively close, which is inadequate to evaluate the effectiveness of replanning strategy in RDM.
We show comparative results for different levels of stochasticity in Figure XII in our rebuttal PDF. The reason all planning-based methods do not perform very well is that randomness will sometimes make the agent reach out-of-distribution states, which leads to a performance drop.
**Q3. Unclear points.**
> The detailed implementation of how the replanned trajectory is generated is missing.
We have presented how to replan in Section 3.2 and provided the pseudocode in Algorithm 3. It requires a partially executed plan, $\tilde{\tau}$, as input. It adds noise to this plan by running the forward process of the diffusion model for $N_f$ steps (where $N_f$ is a pre-defined parameter), and then denoises the trajectory by running the denoising step of the diffusion model for $N_f$ steps. We will make the implementation clearer in the revision.
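The step just described (noise a partial plan forward, then denoise it back) can be sketched as follows. The names here (`model.denoise_step`, `betas`) and the closed-form DDPM-style forward jump are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def replan(traj, model, n_f, betas, rng):
    """Sketch of replanning from a partially executed trajectory.

    Noise the trajectory with n_f forward diffusion steps (done in
    closed form below), then denoise it back with n_f reverse steps
    of the learned diffusion model.
    """
    alpha_bar = np.prod(1.0 - betas[:n_f])
    # Forward process: jump directly to noise level n_f.
    noised = (np.sqrt(alpha_bar) * traj
              + np.sqrt(1.0 - alpha_bar) * rng.standard_normal(traj.shape))
    # Reverse process: run n_f denoising steps of the model.
    x = noised
    for t in reversed(range(n_f)):
        x = model.denoise_step(x, t)  # hypothetical per-step denoiser
    return x
```

Because the forward process only partially corrupts the trajectory, the denoised result stays close to the original plan rather than being regenerated from scratch.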
> The theoretical proof of the effectiveness of likelihood function for partially-executed trajectory is missing, and the threshold of likelihood to choose how to replan is selected empirically which may significantly impact the performance.
We investigate the different thresholds in Figure X and XI and empirically show that our model performs better given the same total diffusion steps.
Also note that our paper is focused on *empirically* validating the effectiveness of using the likelihood function to determine when to replan. We do not claim that this method is theoretically optimal or that the return of the replanned trajectories has any theoretical guarantees, but we note that replanning based on likelihood is principled, as it occurs precisely when states / plans fall outside the distribution learned by the original trajectory model.
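As a rough illustration of likelihood-triggered replanning, the following sketch runs a rollout and replans only when the remaining plan looks unlikely. All names (`env_step`, `likelihood`, `replan_fn`) are hypothetical stand-ins, not the paper's API; `likelihood` represents the diffusion model's estimate of how probable the not-yet-executed part of the plan is given the current state:

```python
def control_loop(env_step, likelihood, plan, threshold, replan_fn):
    """Rollout with likelihood-triggered replanning (illustrative sketch)."""
    states = []
    t = 0
    while t < len(plan):
        state = env_step(plan[t])      # execute the next planned action
        states.append(state)
        remainder = plan[t + 1:]
        # Replan only when the rest of the plan looks out-of-distribution.
        if remainder and likelihood(state, remainder) < threshold:
            plan = plan[:t + 1] + replan_fn(state, remainder)
        t += 1
    return states
```

The point of the threshold is to avoid paying the replanning cost at every step while still reacting when the plan drifts outside the model's training distribution.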
**Q4. Stochastic environment.**
> For stochastic environments, the advantage of methods implementing a replanning strategy over non-replanning methods should increase as stochasticity rises, but the performance gets closer, as shown in Figure 6. Why?
The reason is that randomness will sometimes cause the agent to reach out-of-distribution states, which degrades the performance of all offline RL methods. Our method RDM consistently outperforms the other baselines under different levels of stochasticity. However, at high stochasticity levels, all methods tend to fail.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer rt81,
Thank you again for your comments and suggestions on our paper. We hope that our responses and new results have addressed your questions and concerns. We still have a few days left in the discussion period. If you have any further questions, please don't hesitate to let us know and we'll be happy to address them. Thank you!
Best,
Authors
---
Rebuttal Comment 1.2:
Title: Rebuttal Response
Comment: Thank you for taking the time to provide a thorough explanation. It's evident that the RDM replanning method holds an advantage over more basic replanning techniques, especially in longer sequence tasks. However, its performance in environments with high levels of stochasticity seems to be an area of potential improvement. While optimizing the timing of replanning via likelihood function to enhance the planning success rate is a valuable insight, it doesn't drastically alter my initial impression. Nonetheless, I truly appreciate your efforts in clarifying the methodology. | Summary: This paper introduces Replanning with Diffusion Models (RDM), which utilizes an internally estimated likelihood of the current plan to determine when to perform replanning. The authors propose various strategies for replanning in different scenarios.
Strengths: 1. The introduction of Replanning with Diffusion Models (RDM) and the use of an internally estimated likelihood of the current plan for replanning strategies is novel.
2. The experimental results demonstrate impressive performance in robot control and stochastic environments, surpassing baselines like IQL.
Weaknesses: 1. The validation of the proposed method is not fully comprehensive. In RDM, a key aspect is the estimation of likelihood, but the authors did not discuss the accuracy of the estimation or its impact on final performance.
2. The paper contains some language issues and typos that should be carefully reviewed. For example, in line 178, it should refer to Figure 7 instead of Table 7. Additionally, in line 37, "we propose a principled approach to xxx" has a grammatical error.
3. In the robot control experiments using the RLBench domain, comparing IQL or DT as baselines might not be sufficient. It would be beneficial to compare against other agents known for performing well in the RLBench domain.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I am concerned about the computational resource requirements introduced by the Diffusion Models. For achieving similar performance levels, could you provide insights into the training time and resources needed for IQL, DT, and RDM?
2. While I understand the advantages of replanning, which reduces unnecessary exploration, in RL accurate judgments cannot be made without visiting certain states. How can you ensure that the states not visited after replanning are indeed redundant for learning?
3. The direct introduction of noise into the D4RL dataset (Sec. 4.2) seems unreasonable. Why not collect data directly from random environments to validate this point?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We thank the reviewer for the comments and insightful suggestions.
**Q1. Estimation of likelihood.**
> In RDM, a key aspect is the estimation of likelihood, but the authors did not discuss the accuracy of the estimation or its impact on final performance.
The accuracy of the likelihood estimate depends on the number of diffusion steps used to compute the likelihood. We run 3 diffusion steps in the experiment. In the table below, we compare the performance under different numbers of diffusion steps. We find that 3 steps are sufficient to estimate the likelihood accurately. While using more diffusion steps indeed improves the normalized returns, the improvement is marginal.
Table B. Comparison under different diffusion steps of computing likelihood.
| Different diffusion steps | 1 | 3 (Ours) | 9 | 15 |
| --------- | --------- | --------- | --------- | --------- |
| Normalized Returns | 179.1 (±4.9) | 185.4 (±3.0) | 187.0 (±4.0) | 187.2 (±2.2) |
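One way to read Table B: the likelihood estimate averages a per-step denoising-error term over a few diffusion timesteps, so running more steps mainly reduces the variance of the estimate. A crude sketch of such an estimator (the `denoise_error_fn` callback and the 100-step horizon are assumptions for illustration, not the paper's exact procedure):

```python
import numpy as np

def estimate_log_likelihood(denoise_error_fn, traj, n_steps, rng, horizon=100):
    """Monte-Carlo likelihood proxy for a trajectory under a DDPM-style model.

    Averages the model's denoising error over n_steps randomly chosen
    diffusion timesteps; low error means the trajectory is likely under
    the model, so the negated mean error serves as a log-likelihood proxy.
    """
    ts = rng.integers(0, horizon, size=n_steps)
    errors = [denoise_error_fn(traj, int(t)) for t in ts]
    return -float(np.mean(errors))
```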
**Q2. RLBench baselines.**
> It would be beneficial to compare against other agents known for performing well in the RLBench domain.
Thanks for the suggestion. Note that our method plans using preliminary actions in RLBench. As we discussed in Lines 233-238 of the paper, most other algorithms evaluated on RLBench use *micro-steps*; they do not plan using preliminary actions and cannot be directly compared with our method. To the best of our knowledge, IQL and DT are the state-of-the-art offline RL algorithms that plan using preliminary actions in RLBench, so we use them as baselines in our paper.
**Q3. Training time.**
> For achieving similar performance levels, could you provide insights into the training time and resources needed for IQL, DT, and RDM?
Thanks for the suggestion. We list the training time in the table below; each model is trained on a single Tesla V100 GPU. It is hard to report training times at exactly the same performance level, so we instead report performance under similar training times in the table below. Our model with 4 hours of training time still outperforms IQL and DT.
Table C. Training Time of all models on Close Box.
| Model | IQL | DT | RDM* | RDM |
|-----------------|------|-------|-----------|--------------|
| Training Time (hours) | 3.3 | 4 | 4 | 8 |
| Performance | 10.3 | 7.8 | 58.3 (±8.3)| 61.5 (±7.7) |
\* We use an earlier checkpoint that has a similar computation time to the baseline algorithms.
**Q4. Exploration.**
> How can you ensure that the states not visited after replanning are indeed redundant for learning?
Thanks for the insightful question. We agree that, in general, we cannot ensure that the states not visited will be redundant for learning. One way to integrate exploration into our planning procedure is to measure the uncertainty of the state in a way similar to RND [1] and then determine the timing for replanning based on both the likelihood and the uncertainty of the unvisited states. We leave this as future work and will add it to the discussion in the paper.
[1] Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. arXiv preprint arXiv:1810.12894, 2018.
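For context, an RND-style uncertainty signal trains a predictor to match a fixed random function of the state; the prediction error is small on states resembling the training data and large elsewhere. A minimal sketch follows (real RND [1] uses neural networks; the linear least-squares predictor against a tanh random-feature target here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))  # fixed random "target" network weights

def target_feats(s):
    # Fixed random nonlinear features of the state (never trained).
    return np.tanh(s @ A)

def fit_predictor(visited_states):
    # Linear predictor fit to reproduce the target features on visited data.
    W, *_ = np.linalg.lstsq(visited_states, target_feats(visited_states),
                            rcond=None)
    return W

def novelty(state, W):
    # Prediction error serves as uncertainty: high for unfamiliar states.
    err = state @ W - target_feats(state)
    return float(np.sum(err ** 2))
```

A replanning rule could then trigger when either the plan likelihood is low or the novelty of upcoming states is high.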
**Q5. Stochastic Environments.**
> The direct introduction of noise into the D4RL dataset (Sec. 4.2) seems unreasonable. Why not collect data directly from random environments to validate this point?
We believe that, in practice, many environments have unexpected noise in control, due to either an inaccurate controller or external environment perturbations, and that adding stochasticity to the D4RL environments serves to represent this. We are happy to run additional evaluations in randomized environments in the final version of the paper.
**Q6. Language issues and typos.**
> For example, in line 178, it should refer to Figure 7 instead of Table 7. Additionally, in line 37, "we propose a principled approach to xxx" has a grammatical error.
Thanks for pointing these out. We will address them in the revision.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer kWoD,
Thank you again for your comments and suggestions on our paper. We hope that our responses and new results have addressed your questions and concerns. We still have a few days left in the discussion period. If you have any further questions, please don't hesitate to let us know and we'll be happy to address them. Thank you!
Best,
Authors | Rebuttal 1:
Rebuttal: We thank all the reviewers for taking the time to review our paper and providing insightful and detailed feedback. We appreciate that the reviewers recognize the following contributions.
* **Significance of our problem**. Replanning is a crucial problem in many planning settings.
> Online re-planning is a natural and important complement to many planning methods. (Reviewer G16t)
>
> Replanning is important for robots to execute trajectories robustly. (Reviewer rt81)
>
> This is a potentially useful and interesting observation. (Reviewer bzBf)
>
> The topic of when and how to replan is an important one. (Reviewer XopQ)
* **Methodology**. Our approach to determining when and how to replan is novel and compelling.
> As far as I know, the proposed methods are novel. (Reviewer G16t)
>
> Replanning strategies is novel. (Reviewer kWoD)
>
> The ideas of trying to infer when and how to replan seem novel. (Reviewer bzBf)
>
> The methodology section appears to be sound, and compelling. (Reviewer XopQ)
* **Performance**. Our approach brings significant improvement.
> The proposed method significantly improves the performance of diffusion planners. (Reviewer G16t)
>
> The experimental results demonstrate impressive performance surpassing baselines like IQL. (Reviewer kWoD)
>
> There are some promising aspects to the experiments. (Reviewer bzBf)
* **Presentation**. Some reviewers acknowledge the clear presentation of the paper.
> The overall writing quality is good, and the core idea is presented well. (Reviewer rt81)
>
> The paper is clear, well-written and easy to follow. (Reviewer XopQ)
We want to emphasize again that the **main contributions** of our work are as follows.
* We propose a principled and novel approach to determine when and how a diffusion model should replan in planning tasks.
* We empirically validate the effectiveness of our algorithm in various domains, use comprehensive ablative studies to justify the design of our algorithm, and demonstrate that our algorithm outperforms other state-of-the-art offline RL algorithms in these domains.
In the rebuttal, we address the reviewers' questions and concerns by providing the following new results in our responses and in the rebuttal PDF, as well as reviewer-specific comments in the per-reviewer responses.
**Comprehensive evaluations of the baseline algorithms on more domains.**
* **Replanning-based Baselines**. We show the results of the comparison between our RDM algorithm and the replanning-based baseline algorithms (SDM and PRDM) across more tasks. We run experiments using different intervals or thresholds and measure their computational costs by the total number of diffusion steps. In the table below, we choose one task from each of the three domains and report the total diffusion steps and the performance; more evaluation results are shown in Figures I-V in the rebuttal PDF. Notably, RDM consistently outperforms all the baseline algorithms under the same total diffusion steps (that is, with the same computational budget).
Table A. The total diffusion steps and performance.
|Environment | Method | Total Diffusion Steps | Normalized Returns |
| ----------------------------------------- | -------- | --------------------- | ------------------ |
| Maze2D Large | Diffuser | 2304.0 | 165.9 (±3.8) |
| | SDM | 1925.1 | 175.0 (±5.3) |
| | PRDM | 1916.4 | 169.3 (±1.4) |
| | RDM | 1894.38 | 185.4 (±3.0) |
| Hopper Medium-Expert | DD | 17100 | 47.4 (±3.6) |
| | SDM | 18500 | 54.4 (±3.5) |
| | PRDM | 16200 | 57 (±4.3) |
| | RDM | 15900 | 59.7 (±3.8) |
| Close Box | DD | 1968 | 46.0 (±7.5) |
| | SDM | 1813 | 52.4 (±6.6) |
| | PRDM | 1653 | 50 (±7.2) |
| | RDM | 1600 | 63.5 (±7.0) |
**Comprehensive ablative studies for diffusion-based algorithms.**
* **Different Intervals for Replanning**. We investigate different intervals $I$ for replanning for Diffuser and Decision-Diffuser. The results are shown in Figures VI and VII. We observe that as the interval decreases (that is, as replanning is done more frequently), the performance improves, as we expect. However, when the interval is smaller than a certain value, the performance decreases significantly. For example, in the Maze2D Large domain shown in Figure VI, the return increases when the interval decreases from $250$ to $100$, while it drops when the interval decreases from $100$ to $1$. The ablation results confirm our statement that replanning with an interval that is too small (for example, replanning at every time step) may prevent successful task execution.
* **Different Thresholds for Replanning**. We also analyze the impact of different thresholds $l$ for the baseline algorithms, SDM and RDM. The results are shown in Figures VIII-XI. As we expect, when $l$ decreases (that is, when replanning is done more frequently), the performance of all methods improves. Despite the performance differences among the baseline algorithms, we want to emphasize again that, as shown in Figures I-V, under the same computational budget our RDM algorithm outperforms all the baseline algorithms.
Please see our detailed responses to all the reviewers below.
Pdf: /pdf/ba9355d7707bf38683c7dac0f24f3c96bc1c3164.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper studies how to effectively replan with diffusion models. The authors propose an adaptive online replanning strategy using diffusion models. This strategy employs the estimated likelihood of a plan's success to determine when replanning is needed, avoiding frequent, computationally expensive replanning. It also ensures new plans align with the original trajectory's goal, leveraging previously generated plans efficiently. This method led to a 38% performance improvement over previous diffusion planning approaches on Maze2D and enabled the handling of stochastic and long-horizon robotic control tasks.
Main Contributions:
1. A method to determine when to replan with diffusion
2. A method to generate new plans while utilizing the existing plan
Strengths: - **Motivation**: Online re-planning is a natural and important complement to many planning methods; thus, the motivation of this work is important.
- **Methodology**: This paper proposes two methods: one determines when to replan while the other determines how to replan. The two methods work together to perform effective online replanning with diffusion models. As far as I know, the proposed methods are novel.
- **Performance Improvement**: The proposed method significantly improves the performance of diffusion planners, with a reported 38% gain over past diffusion planning approaches on Maze2D. The authors also show that the proposed method enables the handling of stochastic and long-horizon robotic control tasks.
Weaknesses: - It is still unclear to me why replanning at every time step does not work. In Figure 7, the Decision Diffuser replans at fixed intervals and performs much worse than the proposed method. However, the authors did not mention how large the interval is. If the replanning frequency is very low, then it should apparently work worse than the proposed method. And in the abstract, the authors mention that "replanning at each timestep may prevent successful task execution, as different generated plans prevent consistent progress to any particular goal". I am not convinced by this argument without any examples or further explanations.
- It is also unclear to me why "Replan on State Distance (SDM)" works so badly according to Figure 7. In Section 3.1, the authors mention that "However, in many cases, even if the actual state the agent is in matches with the state in the plan, new information about the environment may make the current plan infeasible." This does not make too much sense to me. I would appreciate it if the authors could provide further details about this point.
- Computation budget is an important factor to consider when comparing different methods. (If we have an unlimited computation budget, I feel it is a good idea to replan at each timestep.) However, the numbers of replanning used in each baseline are not presented in the paper. If the numbers of replanning vary a lot among different methods, then I would not think it is a fair comparison. I wish the authors could provide the numbers of replanning used in each method.
- Although the proposed method aims to avoid frequent, computationally expensive replanning, it's not clear how computationally efficient the new method is, and whether the computational cost of determining when to replan offsets the benefits gained from less frequent replanning.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - It would be great to provide the number of "replan from scratch" and the number of "replan with future context" used in each experiment, since this would help the audience further understand how the replanning helps the agent.
- Why does "replan with future context" work better than "replan from previous context"? Could you give any intuitive explanations?
- See the weakness section for other questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - This method only applies to diffusion models as it adds noises at some diffusion steps. It is unclear how to utilize this method in other planning methods.
- See the weakness section for other limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We thank the reviewer for the detailed comments and insightful suggestions.
**Q1. About replanning at every time step.**
> It is still unclear to me why replaning at every time step does not work.
We investigate different intervals $I$ for replanning for Diffuser and Decision-Diffuser. The results are shown in Figures VI and VII. We observe that as the interval decreases (that is, as replanning is done more frequently), the performance improves, as we expect. However, when the interval is smaller than a certain value, the performance decreases significantly. For example, in the Maze2D Large domain shown in Figure VI, the return increases when the interval decreases from $250$ to $100$, while it drops when the interval decreases from $100$ to $1$. The ablation results confirm our statement that replanning with an interval that is too small (for example, replanning at every time step) may prevent successful task execution.
**Q2. Performance of SDM.**
> It is also unclear to me why "Replan on State Distance (SDM)" works so badly according to Figure 7.
Thanks for the question. In our rebuttal PDF, we compare RDM and SDM on most tasks in Figures I-V and observe that RDM performs better than SDM given the same total diffusion steps. The reason might be that the distance is only computed at the state level, not at the trajectory level. For example, in Figure 1 of the paper, the agent follows the initial plan perfectly. However, it finds the first door inaccessible only when it arrives in front of the door. Although the currently observed state is close to the planned state, it still needs to replan, as the current plan is infeasible.
Another example is shown in Figure 5. The agent tries to open a box but fails, even though it follows the initial plan perfectly. In this case, the state distance between the current state and the planned state is still small. If we used Replan on State Distance (SDM), the agent would not replan. On the other hand, RDM computes the likelihood based on the current environment observation and finds that the likelihood of the original plan is low (since the box does not open). It will replan even though the current state appears similar to the planned state.
**Q3. Computation budget (the number of replanning).**
> I wish the authors could provide the numbers of replanning used in each method.
> It would be great to provide the number of "replan from scratch" and the number of "replan with future context" used in each experiment
Thanks for the suggestion. We show the results of the comparison between our RDM algorithm and the replanning-based baseline algorithms (SDM and PRDM) across more tasks. We run experiments using different intervals or thresholds and measure their computational costs by the total number of diffusion steps. In the table below, we choose one task from each of the three domains and report the total diffusion steps and the performance; more evaluation results are shown in Figures I-V in the rebuttal PDF. Notably, RDM consistently outperforms all the baseline algorithms under the same total diffusion steps (that is, with the same computational budget).
Table A. The total diffusion steps and performance.
|Environment | Method | Total Diffusion Steps | Normalized Returns |
| ----------------------------------------- | -------- | --------------------- | ------------------ |
| Maze2D Large | Diffuser | 2304.0 | 165.9 (±3.8) |
| | SDM | 1925.1 | 175.0 (±5.3) |
| | PRDM | 1916.4 | 169.3 (±1.4) |
| | RDM | 1894.38 | 185.4 (±3.0) |
| Hopper Medium-Expert | DD | 17100 | 47.4 (±3.6) |
| | SDM | 18500 | 54.4 (±3.5) |
| | PRDM | 16200 | 57 (±4.3) |
| | RDM | 15900 | 59.7 (±3.8) |
| Close Box | DD | 1968 | 46.0 (±7.5) |
| | SDM | 1813 | 52.4 (±6.6) |
| | PRDM | 1653 | 50 (±7.2) |
| | RDM | 1600 | 63.5 (±7.0) |
**Q4. Replan with future context.**
> Why does "replan with future context" work better than "replan from previous context"? Could you give any intuitive explanations?
Thanks for the question. "Replan from previous context" serves as a baseline method to confirm that, during the execution of a plan, replanning by conditioning on past states does not help generate better plans. On the other hand, including only the future states, from the current state onward, in the input to the diffusion model (our design choice in "replan with future context") generates the best plans. The reason might be that past states can be distracting to the diffusion model and do not help generate a better plan.
**Q5. Limitations.**
> This method only applies to diffusion models as it adds noises at some diffusion steps. It is unclear how to utilize this method in other planning methods.
Our replanning strategy, based on likelihood, can be applied to any diffusion-based planner, which has seen a variety of applications across different planning settings, such as robotic policies [1] and video [2]. Broadly, we believe the idea of likelihood-based replanning can also be applied to many other likelihood-based planning methods (e.g., the trajectory transformer [3]).
[1] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, Shuran Song. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. arXiv preprint arXiv:2303.04137, 2023.
[2] Yilun Du, Mengjiao Yang, Bo Dai, Hanjun Dai, Ofir Nachum, Joshua B. Tenenbaum, Dale Schuurmans, Pieter Abbeel. Learning Universal Policies through Text-Conditioned Video Generation. arXiv preprint arXiv:2302.00111, 2023.
[3] Michael Janner, Qiyang Li, and Sergey Levine. Reinforcement learning as one big sequence modeling problem. arXiv preprint arXiv:2106.02039, 2021.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer G16t,
Thank you again for your comments and suggestions on our paper. We hope that our responses and new results have addressed your questions and concerns. We still have a few days left in the discussion period. If you have any further questions, please don't hesitate to let us know and we'll be happy to address them. Thank you!
Best,
Authors
---
Rebuttal Comment 1.2:
Title: Thanks for the rebuttal!
Comment: I thank the authors for the rebuttal; some of my concerns have been addressed. See the follow-up questions below.
- **About replanning at every time step:** What happens if the interval is very small? Can you provide an intuitive example?
- **Computation cost of determining when to replan:** This is the 4th point I raised in my initial review, I wish the authors could provide some details about it.
---
Reply to Comment 1.2.1:
Title: Thanks for your response
Comment: Dear Reviewer:
Thank you for the insightful questions.
**Q1. About replanning at every time step.**
We visualized the agent's trajectories under different replanning intervals in Figure XIII in the rebuttal PDF. We can see that if the replanning interval is very small (in Figure XIII(c), the agent replans at every time step), the agent appears to "stagger" in the environment. This is because the plans generated at different environment steps may be inconsistent. This observation confirms that replanning at every time step prevents consistent progress towards the goal, which results in worse performance.
**Q2. Computation cost of determining when to replan.**
The computation cost of RDM depends on the number of diffusion steps for planning and replanning. The number of diffusion steps for planning and replanning is about $2 \times 10^4$ while the number for determining when to replan is about $3 \times 10^3$ in stochastic environments. And for long-horizon planning, the numbers are about $2 \times 10^3$ for planning and replanning and $2 \times 10^2$ for determining when to replan.
Kindly let us know if our responses have addressed your questions and concerns, and if you have further questions. Thank you!
Best,
Authors | null | null | null | null | null | null |
Finite-Time Analysis of Single-Timescale Actor-Critic | Accept (poster) | Summary: The paper studies finding the optimal policy in an infinite-horizon average-reward MDP with an online, sample-based algorithm. The state-of-the-art algorithm in this setting uses two different timescales and is known to have a sample complexity of $\widetilde{\mathcal{O}}(\epsilon^{-2.5})$. This paper shows that a single-timescale version of the actor-critic algorithm enjoys an improved complexity of $\widetilde{\mathcal{O}}(\epsilon^{-2})$, up to a logarithmic factor. If we were always able to sample according to the stationary distribution under the current policy, the logarithmic factor could be removed.
Strengths: Being able to improve the convergence rate from $\widetilde{\mathcal{O}}(\epsilon^{-2.5})$ to $\widetilde{\mathcal{O}}(\epsilon^{-2})$ is a pretty significant contribution. As the authors noted, this rate already matches the rate of standard SGD for non-convex functions. The paper is also well-written.
Weaknesses: While the presentation of the paper is mostly clear, more discussion of how exactly the authors improved the convergence rate and why previous works failed to do so is needed for the audience to appreciate the technical contribution. In fact, many existing works on two-timescale AC do not start off trying to make the updates two-timescale. Making the step sizes for the actor and critic decay at the same rate is allowed, but doing so hurts their convergence rates. Do the authors take advantage of any structure of the MDP beyond general non-convexity that makes a single timescale more favorable?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The related work section is somewhat limited. The connection of the paper to existing works on single-loop AC algorithms, including the following, should be discussed.
Hong, M., Wai, H.T., Wang, Z. and Yang, Z., 2023. A two-timescale stochastic algorithm framework for bilevel optimization: Complexity analysis and application to actor-critic. SIAM Journal on Optimization, 33(1), pp.147-180.
Zeng, S., Doan, T.T. and Romberg, J., 2021. A two-time-scale stochastic optimization framework with applications in control and reinforcement learning. arXiv preprint arXiv:2109.14756.
Khodadadian, S., Doan, T.T., Romberg, J. and Maguluri, S.T., 2022. Finite sample analysis of two-time-scale natural actor-critic algorithm. IEEE Transactions on Automatic Control.
Zhou, M. and Lu, J., 2022. Single Time-scale Actor-critic Method to Solve the Linear Quadratic Regulator with Convergence Guarantees. arXiv preprint arXiv:2202.00048.
- I do not feel that the discussion of the i.i.d. sampling case and the theorem adds much to the paper. The claimed contribution on Markovian sampling also seems a bit oversold. Many existing works have shown how Markovian samples can be handled at the cost of introducing a log factor, which does not appear if the samples are i.i.d.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thanks for reviewing our paper!
**(W1: On general MDP)** Thanks for your comments. In this paper, we only consider the general MDP with a non-convex objective function. The single-timescale approach is superior to the two-timescale approach because the latter updates the actor more slowly than the critic; this introduces a delay in learning and an inefficient usage of data. Considering other structures of the MDP to validate the superiority of single-timescale actor-critic would be interesting future work.
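To make the timescale distinction concrete, here is a small illustrative sketch (ours, not from the paper; the decay exponents are hypothetical choices) contrasting the constant actor/critic stepsize ratio of a single-timescale scheme with the vanishing ratio of a two-timescale scheme:

```python
# Hypothetical stepsize schedules, for illustration only.

def single_timescale(t, c=0.1):
    """Actor and critic stepsizes decay at the same rate: ratio stays c."""
    beta = 1.0 / (t + 1) ** 0.5   # critic stepsize
    alpha = c * beta              # actor stepsize, constant ratio c
    return alpha, beta

def two_timescale(t):
    """Actor stepsize decays faster: ratio alpha/beta -> 0 as t -> inf."""
    beta = 1.0 / (t + 1) ** 0.5    # critic stepsize
    alpha = 1.0 / (t + 1) ** 0.75  # actor stepsize (asymptotically slower)
    return alpha, beta

single_ratios = [single_timescale(t)[0] / single_timescale(t)[1] for t in (10, 10**3, 10**6)]
two_ratios = [two_timescale(t)[0] / two_timescale(t)[1] for t in (10, 10**3, 10**6)]
# single_ratios stays at c = 0.1 forever, while two_ratios shrinks toward 0:
# the two-timescale actor is artificially slowed down relative to the critic.
```

The shrinking ratio in the two-timescale case is exactly the "delay in learning" referred to above: each sample moves the actor by an ever-smaller amount relative to the critic.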
**(Q1: On related reference)** Thanks for sharing the relevant references. We will discuss them adequately.
**(Q2: On the discussion of i.i.d. sampling)** Thanks for your comments. We agree that the discussion of the i.i.d. sampling case does not add much to the paper. The i.i.d. case is included for comparison to show that the log factor is caused by the Markovian samples.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the response. My most important question is: what exactly is the improvement in the analysis made by the authors that enhanced the convergence rate, and why did prior works fail to do that? As I commented above, many existing works on two-timescale AC do not start off trying to make the updates two-timescale. Making the step sizes for the actor and critic decay at the same rate is allowed, but doing so hurts their convergence rates.
---
Reply to Comment 1.1.1:
Comment: Thanks for your comment. We are afraid there is a misunderstanding about the implication of the stepsizes' timescales. For single-timescale algorithms, the step sizes for the actor and critic decay at the same rate, hence the two update at approximately the same speed. For two-timescale algorithms, the actor stepsize decays faster than the critic stepsize, hence the actor updates much more slowly than the critic asymptotically. Due to this artificial slowing down of the **actor** update in the two-timescale approach, single-timescale approaches typically converge faster than two-timescale approaches [Oleshevsky & Gharesifard, 2023, Paragraph 5 of Introduction]. As we also mentioned in our initial comments, "there is a delay in learning and an inefficient usage of data in the two-timescale approach", because the actor updates with a small stepsize at each sample. Many existing works consider the two-timescale approach because it is easier to analyze than the single-timescale approach, not because it converges faster. We hope this clarifies our response. | Summary: The work studies the convergence of the actor-critic algorithm under the single-timescale update, where the step sizes of the actor and critic are proportional by a constant factor. The authors establish an $\epsilon$-approximate solution with a sample complexity of $\tilde{O}(\epsilon^{-2})$ under standard assumptions and $O(\epsilon^{-2})$ under i.i.d. sampling.
Strengths: This paper tries to analyze the sample complexity of the AC method under more practical Markovian sampling, where the transition tuples are generated from a single trajectory. The theoretical problem is well-motivated.
The paper proposes a new analysis framework for the algorithm and establishes a sample complexity of $O(\epsilon^{-2})$, which matches the best existing sample complexity for the single-timescale AC algorithm, and a sample complexity of $\tilde{O}(\epsilon^{-2})$ without the i.i.d. assumption.
Weaknesses: After removing the assumption of independent and identically distributed (i.i.d.) data, the authors introduce a new assumption, namely that the Markov chain is geometrically mixing and the rollout state distribution rapidly approaches the stationary distribution (Assumption 3.2). I am curious whether this assumption, although slightly weaker than the i.i.d. assumption, can still be considered strong, as it may not be commonly applicable in practical scenarios. Furthermore, this assumption appears to conflict with the statement "the transition tuples are generated from a single trajectory."
In the final results, the author requires a condition of $T>2\tau$ to ensure that the obtained samples are sufficiently close to samples from an i.i.d. stationary distribution.
Numerous studies have already examined actor-critic methods and achieved the same sample complexity of $O(\epsilon^{-2})$. The authors claim that their proposed method can be utilized without relying on the i.i.d. assumption, achieving a slightly inferior complexity of $\tilde{O}(\epsilon^{-2})$ under alternative assumptions. It is hard to judge the contribution in this situation, since it is natural to obtain a worse result under milder assumptions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The authors assert that their method is capable of addressing continuous state settings, even when the state space is infinite. However, it is unclear whether the analyses can also be extended to infinite action spaces. Could the authors provide some clarification and analysis on this aspect?
Since the paper obtains the same $O(\epsilon^{-2})$ result as previous works, could the authors discuss and compare their results more specifically (e.g., a comparison of constants or simple control experiments)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper still needs many strong assumptions to analyze the problem and it is very understandable due to the theoretical nature of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thanks for reviewing our paper!
**(W1: On single trajectory and condition $T>2\tau_T$)** Thanks for your comment. We are afraid that there is a misunderstanding concerning our work. Assumption 3.2 does not conflict with the statement "the transition tuples are generated from a single trajectory." As shown in Algorithm 1, all transition tuples are generated from a single trajectory. In the final results, $T$ is the total iteration number, and $T>2\tau_T$ means that the theorem holds for large $T$. Each state-action pair $(s_t,a_t)$ is sampled consecutively online from a single trajectory, rather than waiting for at least $2\tau_T$ steps. One of the challenges in our analysis is exactly due to such parsimonious online sampling, because it does not guarantee that the sample is distributed sufficiently close to the stationary distribution.
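The single-trajectory, single-sample update pattern described above can be sketched with a toy tabular MDP (this is our own schematic illustration, not the paper's exact Algorithm 1; the environment, stepsize schedule, and average-reward TD update are hypothetical choices):

```python
import numpy as np

# Toy MDP: random transition kernel and rewards, purely for illustration.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = next-state dist.
R = rng.random((n_states, n_actions))                             # rewards in [0, 1)

theta = np.zeros((n_states, n_actions))  # softmax actor parameters
omega = np.zeros(n_states)               # tabular (linear) critic
eta = 0.0                                # running average-reward estimate
c = 0.1                                  # constant actor/critic stepsize ratio

s = 0
for t in range(2000):
    beta = 1.0 / (t + 1) ** 0.5          # critic stepsize
    alpha = c * beta                     # actor stepsize: same decay rate
    logits = theta[s] - theta[s].max()
    pi = np.exp(logits) / np.exp(logits).sum()
    a = int(rng.choice(n_actions, p=pi))
    s_next = int(rng.choice(n_states, p=P[s, a]))
    r = R[s, a]
    delta = r - eta + omega[s_next] - omega[s]   # average-reward TD error
    eta += beta * (r - eta)                      # track the reward rate
    omega[s] += beta * delta                     # critic: one-sample update
    grad_log = -pi                               # d/dtheta log pi(a|s) ...
    grad_log[a] += 1.0                           # ... for a softmax policy
    theta[s] += alpha * delta * grad_log         # actor: same single sample
    s = s_next                                   # stay on the one trajectory
```

Note there is no restart and no batching: each iteration consumes exactly one transition from the ongoing trajectory, which is why the sample's distribution need not be close to stationary at any given step.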
**(Q1: On infinite action space)** Our work is capable of addressing a continuous state space but not an infinite action space. We stated in Line 123 that "We consider a finite action space, whereas the state space can be either a finite set or an (unbounded) real vector space". To the best of our efforts, it remains challenging to investigate the infinite action space setting. One of the key obstacles we have identified is that if the action space is infinite, one cannot ensure the target critic $\omega^\ast(\theta)$ is still Lipschitz continuous (Lemma B.3). We leave this challenging setting for future work.
**(Q2: On the comparison with previous works)** We compare our work with two important previous results. We extend the work of [Chen et al., 2021] to the Markovian sampling setting, and we can additionally show convergence of the critic. We extend the work of [Oleshevsky & Gharesifard, 2023] to the infinite state space and Markovian sampling settings. The reader is referred to the second part of Main Contribution for a detailed comparison.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed explanation of weakness 1; I had some misunderstanding before. And I totally understand that dealing with continuous actions is quite challenging. Thus I'm willing to increase my score to a 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for recognizing our contribution! | Summary: This paper studies the actor-critic algorithm with linear function approximation. The authors provide a single-time-scale analysis for the AC and achieve $\epsilon^{-2}$ sample complexity.
Strengths: - The paper is overall well written and easy to follow
- The single-time-scale analysis with Markovian noise for actor-critic is new in the literature, to the best of my knowledge.
- The policy gradient norm error analysis is interesting
Weaknesses: - I'm confused about Assumption 3.4, I wonder how $L_\mu$ contributes to the final results.
- Assumption 3.1, though the authors mention a few papers using the same assumption, sounds pretty strong to me. It basically requires that even the optimal policy visit all state-action pairs, which is quite uncommon in practice.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My concerns are listed in the weakness part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thanks for reviewing our paper!
**(Q1: On the contribution of $L_\mu$ to the final results)** Thanks for seeking clarification. In the current analysis, our final results are linear in the parameter $L_\mu$. Keeping $L_\mu$ explicit, the convergence rate in Theorem 3.5 can be written as $\mathcal{O}(L_\mu\frac{\log^2 T}{\sqrt{T}})$.
**(Q2: On Assumption 3.1)** Although Assumption 3.1 is standard in theoretical analysis, we do understand that it may not be satisfied easily in practice. Note that full state-action exploration is a sufficient condition that guarantees Assumption 3.1; the converse may not be true. In addition, the condition here is presented as stronger than necessary for the sake of convenience: we actually only need the condition to hold for those $\theta$ that are actually visited during learning, not for all $\theta$. However, such more precise conditions are difficult to characterize analytically.
Strengths: As stated in the summary, this work establishes that single-timescale, single-sample, average-reward actor-critic under Markovian sampling achieves $\epsilon$-stationarity with $\widetilde{O}(\epsilon^{-2})$ sample complexity. This shows that this version of actor-critic matches the state of the art, resolving what was previously an open question. Despite the analysis presented in the appendix being notation-heavy and somewhat difficult to follow, and though there are issues in the exposition and assumptions that need to be addressed (see weaknesses and questions below), the main steps in the theoretical results appear to be sound. The analysis appears to build off of the small-gain analysis of [Oleshevsky & Gharesifard, 2023], combined with elements of previous two-timescale analyses for average-reward actor-critic with the uniform ergodicity assumption such as [Wu et al., 2020], as well as some algebraic manipulations of inequalities. Though of uncertain practical utility, this contribution is definitely of interest to the theoretical reinforcement learning community.
Weaknesses: I have some concerns about: (i) clarity about the innovation required in the analysis; (ii) the satisfiability of the assumptions made; and (iii) the dependence of the main result presented in Step 4 in lines 331-339 on having access to potentially unknown problem-specific constants when designing stepsizes. Addressing these issues or being clear about any limitations will likely strengthen the paper.
(i) As stated above, the analysis appears to build off of [Oleshevsky & Gharesifard, 2023] to handle the interconnectedness of the single-timescale analysis, elements of two-timescale analyses like [Wu et al., 2020] to handle Markovian sampling, as well as algebraic manipulations of inequalities to combine it all together. However, it is unclear from the main body whether the analysis is a straightforward (albeit complicated) combination of these existing methods, or if some critical new insight and innovation are required. If the former, the significance of the contribution is weakened. If the latter, it should be stated more clearly. Also, the abstract claims that the analysis applies to continuous state spaces, and on lines 73-74 it is stated that achieving this requires "significantly non-trivial effort in the analysis". It is not clear where in the analysis this non-trivial effort takes place, however. If serious additional effort was required, it should be clearly stated exactly where it occurs.
(ii) Two of the assumptions call into doubt the applicability of the result to continuous state spaces. On lines 198-199, it is stated that Assumption 3.1 holds under certain conditions in the tabular case, but applicability to the continuous state space case is not mentioned. The reader wonders: does Assumption 3.1 hold when $|S| = \infty$? In addition, lines 220-221 state that Assumption 3.4 holds for the finite state-action space setting. Again: what about when $|S| = \infty$? Since the analysis relies on these assumptions, it must be shown that they hold in the continuous state space setting for the corresponding claims in the abstract and introduction to be true. Finally, though uniform ergodicity (Assumption 3.2) is commonly assumed in average-reward analyses, the recent two-timescale analysis of [W. A. Suttle, A. S. Bedi, B. Patel, B. M. Sadler, A. Koppel, D. Manocha, _Beyond exponentially fast mixing in average-reward reinforcement learning via multi-level monte carlo actor-critic_. ICML 2023] removes the need for this assumption. It would be helpful to see a discussion of why Assumption 3.2 is still required in the single-timescale analysis or how it might be eliminated.
(iii) On lines 334-337, it is pointed out that, for Step 4 (cf. lines 331-339) to work, a certain set of inequalities (line 335) needs to be satisfied. It is stated that this condition can be satisfied by choosing the stepsize ratio $c$ to be smaller than some threshold. On line 556 in the appendix, this threshold is given, but it involves the problem-specific constants $\lambda, L_*, G, B$. It thus appears necessary that we have oracle knowledge or that an additional estimation procedure is required for the specific stepsize scheme to be operable.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * What is the key innovation required in the analysis?
* What is the effort required in accommodating continuous state spaces?
* Under what conditions do Assumptions 3.1 and 3.4 hold for continuous state spaces?
* Why is Assumption 3.2 necessary? Or can it be eliminated?
* How do we choose the stepsize ratio $c$ in practice?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Aside from the weaknesses and questions described above, the authors have adequately addressed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thanks for reviewing our paper!
**(Q1: Key innovation in the analysis)**
Thanks for seeking clarification. The key innovation lies in conducting a comprehensive analysis of each error term. In two-timescale actor-critic [Wu et al., 2020], convergence is deduced relying on multiplying the error term by the diminishing stepsize ratio $\frac{\alpha_t}{\beta_t} \rightarrow 0$ as $t\rightarrow \infty $. However, in the single-timescale setting, this ratio is constant $\frac{\alpha_t}{\beta_t}=c$, requiring a more in-depth investigation and tighter analysis of each error term to establish convergence. These lead to non-trivial proofs that are distinct from the two main references, and we highlighted them in the proof sketch.
Taking the error term $y_t(J(\theta_t)-J(\theta_{t+1}))$ as an example, the two-timescale work bounds it using the $L_J$-Lipschitz continuity of $J(\theta)$. In the single-timescale approach, relying solely on Lipschitz continuity is still too loose. To overcome this, we delve deeper and discover that the gradient of $J(\theta)$ also possesses Lipschitz continuity (Lemma B.2), which means $J(\theta)$ is $L_{J'}$-smooth. With this new insight, we successfully bound the term $y_t(J(\theta_t)-J(\theta_{t+1}))$ by $\sqrt{Y_TG_T}$, thereby establishing a solvable interconnected system. Together with many other careful analyses, we achieve much tighter bounds and are able to establish convergence of the more challenging single-timescale algorithm.
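The role of the smoothness upgrade can be sketched schematically (this is our reconstruction, using the hypothetical shorthand $Y_T \approx \sum_t y_t^2$ and $G_T \approx \sum_t \|\nabla J(\theta_t)\|^2$; the paper's exact definitions and constants may differ):

```latex
% Lipschitz continuity alone gives only the loose per-step bound
% |J(\theta_t) - J(\theta_{t+1})| \le L_J \|\theta_{t+1} - \theta_t\|.
% L_{J'}-smoothness refines this to a first-order expansion with a
% quadratic remainder:
\[
  \bigl| J(\theta_{t+1}) - J(\theta_t)
         - \langle \nabla J(\theta_t),\, \theta_{t+1} - \theta_t \rangle \bigr|
  \;\le\; \tfrac{L_{J'}}{2}\, \|\theta_{t+1} - \theta_t\|^2 .
\]
% Since the actor step satisfies \|\theta_{t+1} - \theta_t\| = O(\alpha_t),
% the leading term now carries \nabla J(\theta_t); summing over t and
% applying Cauchy--Schwarz couples the two error sums:
\[
  \sum_{t=1}^{T} y_t \bigl( J(\theta_t) - J(\theta_{t+1}) \bigr)
  \;\lesssim\; \sqrt{\, Y_T \, G_T \,} \;+\; \text{(higher-order terms)} .
\]
```

The point of the coupling is that neither error sum needs to be bounded in isolation; the product form feeds into the interconnected system of inequalities.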
**(Q2: On the effort required in accommodating continuous state spaces)**
We note that moving from a finite to an infinite state space takes significant and nontrivial effort in the analysis. This is because some established results rely on intrinsic problem constants, such as various Lipschitz constants, that depend on the finite size of the state space $|\mathcal{S}|$, which becomes infinite in the infinite state space scenario. Additionally, existing analyses concatenate all state-action pairs to create a finite-dimensional feature matrix and often require summation over all states [Oleshevsky & Gharesifard, 2023]. These analyses are not possible when the state space is uncountable. Moreover, some convenient properties no longer exist in the continuous state space setting. For example, the uniform boundedness of the Hessian of the problem matrix $A_\theta$ no longer holds ([Oleshevsky & Gharesifard, 2023], Lemma 4.8). These fundamental challenges require completely different analysis techniques.
**(Q3: On Assumptions 3.1 and 3.4)** In fact, Assumption 3.1 is a fundamental regularity condition, which is often made to guarantee the problem’s solvability in the linear function approximation case and on continuous state space [Wu et al., 2020].
For Assumption 3.4, $\nabla_\theta \mu_\theta(s)$ is a Jacobian matrix whose $d$th column is the gradient of the $d$th action dimension of the policy with respect to the policy parameters $\theta$. Assumption 3.4 is equivalent to $L_\mu$-smoothness of the stationary state distribution. It is difficult to characterize a general condition under which this assumption holds, but there are cases for which it does, for example, linear systems with a linear state-feedback policy.
**(Q4: On Assumption 3.2)** Assumption 3.2 cannot be eliminated for the analyzed algorithm. The provided reference analyzes a multi-level Monte Carlo actor-critic, which notably samples $2^{j_t}$ state-action pairs for each policy $\pi_{\theta_t}$ update. Taking the average of these multiple samples effectively reduces the statistical error (deviation from the stationary distribution). However, we analyze the most general and fundamental **single-sample** single-timescale online actor-critic, which means that for each policy update, we only sample one state-action pair. Consequently, Assumption 3.2 becomes imperative in order to characterize the disparity between the distribution of the Markovian sample and the corresponding stationary distribution.
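The difference in per-update sampling cost can be illustrated with a toy count (the level distribution below is a hypothetical choice for illustration; the cited MLMC work draws the level $j_t$ randomly each iteration, typically from a geometric-style distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
T, j_max = 1000, 10

# Single-sample scheme (the one analyzed in the paper):
# exactly one transition consumed per policy update.
single_sample_cost = T

# MLMC-style scheme (reviewer-cited reference): 2**j_t transitions per
# update, with the level j_t drawn at random each iteration.
levels = rng.integers(0, j_max, size=T)   # hypothetical level distribution
mlmc_cost = int(np.sum(2 ** levels))
```

The averaging over many samples is what lets the MLMC scheme control the deviation from stationarity without a uniform ergodicity assumption; the single-sample scheme has no such averaging, hence Assumption 3.2.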
**(Q5: On choosing the stepsize ratio in practice)** Due to the theoretical nature of the work, we characterize an upper bound on the stepsize ratio $c$ that depends on unknown problem-specific constants; this is a sufficient condition for convergence. In practice, one can choose a relatively small $c$ to guarantee convergence, just as in many machine learning problems solved with SGD, where one typically chooses a small stepsize to obtain a better convergence guarantee and performance.
---
Rebuttal Comment 1.1:
Title: Clarification request: continuous spaces
Comment: Thanks very much for the responses. In the abstract on line 6 and contributions on lines 73-74, you mention that you target the continuous spaces setting. In your response to (Q2) above, you have nicely outlined the **challenges** inherent in the continuous space setting, but I still don't understand how you **overcame** those challenges.
Two questions:
1. Does your analysis handle the continuous spaces setting?
2. Can you point out specific locations (e.g., with sections or line numbers) in the analysis where you made "significantly non-trivial effort" to handle this setting? Also, can you explain how these efforts are completely different from previously used techniques?
---
Reply to Comment 1.1.1:
Title: Thanks for seeking clarification!
Comment: **(Q1: On continuous **state** space setting)** Thanks for your careful reading and efforts to help us elucidate our contribution. In both places pointed out in your comments, we had highlighted that we consider continuous **state** space setting. Indeed, our analysis can handle the continuous state space setting.
**(Q2: On the effort to handle continuous setting)** Thanks for seeking clarification. [Oleshevsky & Gharesifard, 2023] can only handle finite state spaces. Their analysis heavily relies on the construction of a finite-dimensional feature matrix $\Phi$ whose column dimension is equal to the number of all state-action pairs. In particular, many key properties and bounds were established relying on this matrix. For example, they establish the implicit upper bound for the critic error (Lemma 5.14) based on the Lipschitzness of the actor update (Lemma 5.5) and the uniform boundedness of the Hessian of the problem matrix $A_\theta$ (Lemma 5.8). These properties no longer hold once moving to the continuous state space setting. Basically, all their proofs deriving the implicit bounds are inapplicable to the continuous state space case.
In contrast, our approach directly establishes the implicit bounds on $Y_T$, $Z_T$, and $G_T$ (see Theorem C.2, Theorem C.5, and Theorem C.7, respectively) through a series of novel characterizations of error terms that are completely different from [Oleshevsky & Gharesifard, 2023]. For example, our implicit upper bound for the critic error (Theorem C.5) is established using the Lipschitzness of the critic target $\omega^\ast$ (Lemma B.3) and the $L_s$-smoothness of $\omega^\ast$ (Lemma B.4), which do not depend on the finite state space assumption. Moreover, our characterization controls the critic error (referred to as $Z_T$ in our paper) by the product of the critic error and the reward estimation error (referred to as $Y_T$; see Line 508, equation (18)), which is a key step in guaranteeing convergence, and, more importantly, it holds under a continuous state space. Similarly, we developed a closer coupling of all three errors in all three implicit bounds, which eventually enables the establishment of convergence. Overall, our main effort and technical contribution lie in the proper decomposition and bounding of various error terms that eventually result in a solvable interconnected system. The interconnected-system formulation may seem incremental compared to [Oleshevsky & Gharesifard, 2023] without diving into the detailed proof, but we would like to emphasize that the true challenge and difficulty lie in how to characterize the implicit bounds for the critic, actor, and reward estimation errors that constitute the interconnected system under the more challenging continuous state space and Markovian sampling setting.
We hope the above better clarifies our contribution. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This works provides finite sample analysis for single timescale actor critic.
Strengths: A finite sample analysis for single timescale actor critic under Markovian noise and infinite state space is definitely a notable contribution to the community.
Weaknesses: My biggest concern is the correctness of this work. This work is a resubmission from ICML 2023. In the ICML round, all reviewers agreed that this is a good paper until one expert reviewer, who seems to really be in the field, pointed out one critical technical error in the proof, leading to the rejection. But I unfortunately do not have access to the ICML reviews now, so I will recommend "rejection" for now. If the authors provide the following information in the rebuttal, I am more than happy to evaluate the work again.
1. What errors did the reviewer point out in the ICML round? (It would help a lot if a copy of the original review could be provided, assuming the authors still have access to it.)
2. Was the reviewer wrong, or were the authors wrong?
3. If the authors were wrong, how is the bug fixed in this version? If the reviewer was wrong, what mistake did the reviewer make?
It would help a lot if the authors could answer the above questions in details and in a self-contained way.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thanks for reviewing our paper!
**(Q1: What error was pointed out in the ICML submission)** Thanks for reviewing our work again. We are glad to show our refined results. The original comment in the ICML round was: "However, the authors utilize a very quick but wrong inequality to extend the analysis technique in [1] to the infinite state space case: When analyzing the interconnected system (line 1122), the term "$bZ_T$" should be corrected to "$b\sqrt{Z_T}$" according to equation (18). This is a quite nontrivial mistake. Since the term "$b$" is of constant order (line 1112) under the infinite state space setting, the convergence rate will NOT be $\widetilde{\mathcal{O}}(1/\sqrt{T})+\mathcal{O}(T)$. Instead, it will be an absolute constant independent of $T$ if we apply the authors' trick of Young's inequality. As a consequence, the whole remaining proof should be wrong." So the error occurred at Line 1122 (now fixed at Line 557), where, according to equation (18) (still equation (18) in this version), the correct term should be $\sqrt{Z_T}$ but we wrongly took it as $Z_T$ and performed the subsequent analysis. We have now fixed this careless mistake and proved the results correctly.
**(Q2: Who was wrong)** We were wrong. During the rebuttal, we realized that simply choosing appropriate stepsizes cannot fix the bug. We then have done a major revision after the rebuttal.
**(Q3: How we fixed the bug in NeurIPS submission)** We have developed a new proof under an additional Assumption 3.4 of the $L_s$-smoothness property of the critic target $\omega^\ast(\theta)$, where a different and tighter system of inequalities was developed. The smoothness property is used to bound the term $\langle z_t, \omega^\ast(\theta_t)-\omega^\ast(\theta_{t+1})\rangle$, which tracks both the critic estimation performance $z_t$ and the difference between the drifting critic targets $\omega^\ast(\theta_t)$. In particular, we bound it by $\sqrt{Z_T(2Y_T+8Z_T)}$ (see Theorem C.5). Then we were able to derive a solvable interconnected system, the solution of which leads to our main results. In the ICML version, we erroneously applied a crude bound of $\sqrt{Z_T}$ to bound the same term. However, we mistakenly treated $\sqrt{Z_T}$ as $Z_T$ in our attempts to solve the interconnected system. Indeed, if it were $Z_T$, the system is unsolvable and cannot show convergence as pointed out by the ICML reviewer.
---
Rebuttal Comment 1.1:
Title: Assumption too strong
Comment: I am afraid Assumption 3.4 is way too strong. [8] does have such an assumption. But I quote from [8]
> Assumption 11 is the counterpart of Assumption 10 that is made for the stationary distribution μθ(a|s). Note that the existence of ∇μθ(s) has been shown in [2]. In this case, under Assumption 10, i) and iii) of Assumption 11 can be obtained from the sensitivity analysis of Markov chain; see e.g., [32, Theorem 3.1]. While we cannot provide a justification of (ii), we found it necessary to ensure the smoothness of the lower-level critic solution y∗(θ).
[8] does **NOT** provide a justification for the smoothness of the stationary distribution.
The argument the authors make is "This assumption holds for the finite state-action space setting". I would like to see a proof, since [8] does not prove this.
I think it's ok for [8] to use this assumption because RL is merely 1 of their 4 applications of a more general result, but RL is all of this work.
---
Reply to Comment 1.1.1:
Comment: Thanks for your comment. The proof of Assumption 3.4 under the finite state-action space setting can be found in Lemma 14 of [1]. We forgot to cite this reference and will include it in our revised version.
[1] Qijun Luo and Xiao Li. Finite-time analysis of fully decentralized single-timescale actor-critic. arXiv preprint arXiv:2206.05733, 2022 | null | null | null | null | null | null |